Test Report: Docker_Linux 21656

8fdbaae537091671bd14dcf95cc23073d72e85b2:2025-09-29:41680

Failed tests (10/346)

TestAddons/serial/Volcano (374.29s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 16.136445ms
addons_test.go:868: volcano-scheduler stabilized in 16.209256ms
addons_test.go:876: volcano-admission stabilized in 16.285921ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-kbgrj" [6a525f26-530a-4629-9a2c-fd4fb60b1c94] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
addons_test.go:890: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:890: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-529420 -n addons-529420
addons_test.go:890: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-09-29 10:44:38.686346178 +0000 UTC m=+532.522264990
addons_test.go:890: (dbg) Run:  kubectl --context addons-529420 describe po volcano-scheduler-76c996c8bf-kbgrj -n volcano-system
addons_test.go:890: (dbg) kubectl --context addons-529420 describe po volcano-scheduler-76c996c8bf-kbgrj -n volcano-system:
Name:                 volcano-scheduler-76c996c8bf-kbgrj
Namespace:            volcano-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      volcano-scheduler
Node:                 addons-529420/192.168.49.2
Start Time:           Mon, 29 Sep 2025 10:36:56 +0000
Labels:               app=volcano-scheduler
                      pod-template-hash=76c996c8bf
Annotations:          <none>
Status:               Pending
SeccompProfile:       RuntimeDefault
IP:                   10.244.0.16
IPs:
  IP:           10.244.0.16
Controlled By:  ReplicaSet/volcano-scheduler-76c996c8bf
Containers:
  volcano-scheduler:
    Container ID:  
    Image:         docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Args:
      --logtostderr
      --scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
      --enable-healthz=true
      --enable-metrics=true
      --leader-elect=false
      --kube-api-qps=2000
      --kube-api-burst=2000
      --schedule-period=1s
      --node-worker-threads=20
      -v=3
      2>&1
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      DEBUG_SOCKET_DIR:  /tmp/klog-socks
    Mounts:
      /tmp/klog-socks from klog-sock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h7w57 (ro)
      /volcano.scheduler from scheduler-config (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  scheduler-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      volcano-scheduler-configmap
    Optional:  false
  klog-sock:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-h7w57:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  7m42s                 default-scheduler  Successfully assigned volcano-system/volcano-scheduler-76c996c8bf-kbgrj to addons-529420
  Normal   Pulling    4m1s (x5 over 7m41s)  kubelet            Pulling image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
  Warning  Failed     4m (x5 over 7m2s)     kubelet            Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m (x5 over 7m2s)     kubelet            Error: ErrImagePull
  Warning  Failed     2m (x20 over 7m2s)    kubelet            Error: ImagePullBackOff
  Normal   BackOff    107s (x21 over 7m2s)  kubelet            Back-off pulling image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
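The Events above pin the failure on Docker Hub's anonymous pull rate limit (toomanyrequests), not on Volcano itself. As a minimal sketch of how to check the remaining anonymous quota from the affected host, using Docker Hub's documented rate-limit probe repository (assumes curl and jq are installed; nothing here is specific to this run):

    # Anonymous token for the dedicated rate-limit probe repository
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    # HEAD request; per Docker's docs it does not consume quota, and the
    # response carries ratelimit-limit and ratelimit-remaining headers
    curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

Authenticating the node's Docker daemon or pointing it at a registry mirror would raise or bypass the anonymous limit; for a shared CI host that is a configuration fix rather than a test fix.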
addons_test.go:890: (dbg) Run:  kubectl --context addons-529420 logs volcano-scheduler-76c996c8bf-kbgrj -n volcano-system
addons_test.go:890: (dbg) Non-zero exit: kubectl --context addons-529420 logs volcano-scheduler-76c996c8bf-kbgrj -n volcano-system: exit status 1 (77.335923ms)

** stderr ** 
	Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-76c996c8bf-kbgrj" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:890: kubectl --context addons-529420 logs volcano-scheduler-76c996c8bf-kbgrj -n volcano-system: exit status 1
addons_test.go:891: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
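The 6m0s wait that expired at addons_test.go:890 can be reproduced by hand; a rough equivalent of the harness's poll loop using kubectl wait (assumes the same kubeconfig context the test used):

    # Blocks until every pod labeled app=volcano-scheduler is Ready, or times out
    kubectl --context addons-529420 -n volcano-system wait --for=condition=Ready pod -l app=volcano-scheduler --timeout=6m

With the image pull stuck in ImagePullBackOff, this times out the same way the test did.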
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/serial/Volcano]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-529420
helpers_test.go:243: (dbg) docker inspect addons-529420:

-- stdout --
	[
	    {
	        "Id": "c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70",
	        "Created": "2025-09-29T10:36:32.534326886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 809726,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:36:32.572013527Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/hostname",
	        "HostsPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/hosts",
	        "LogPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70-json.log",
	        "Name": "/addons-529420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-529420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-529420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70",
	                "LowerDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21-init/diff:/var/lib/docker/overlay2/8785ead96546661033f04e32798b1ac8b184b1c04fc8862515b87d56909b6b9d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-529420",
	                "Source": "/var/lib/docker/volumes/addons-529420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-529420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-529420",
	                "name.minikube.sigs.k8s.io": "addons-529420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "198f81b793e83c57ca2854cf8d1d0464141313e0495f23d0367891bbb8b05aa2",
	            "SandboxKey": "/var/run/docker/netns/198f81b793e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-529420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:86:fe:b5:4c:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3724749a501df880fce098abbef40c1fc5e7d4b0ca633d7b7b33237babda66c6",
	                    "EndpointID": "381be5dcde6efd6127536cd89de924bde7f3b532aa3b84038bdaf13a91eea4d0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-529420",
	                        "c119ca8ea690"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
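When only one field of the inspect dump matters, docker inspect accepts a Go-template format string instead of emitting the full JSON. Two sketches against fields visible above (the second template is the same one the harness runs later in this log to discover the mapped SSH port):

    # Container IP on the minikube-created network (map key needs index because of the hyphen)
    docker inspect -f '{{(index .NetworkSettings.Networks "addons-529420").IPAddress}}' addons-529420
    # Host port bound to the container's SSH port 22/tcp
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-529420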
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-529420 -n addons-529420
helpers_test.go:252: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-529420 logs -n 25: (1.066813212s)
helpers_test.go:260: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-079557 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-079557   │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │ 29 Sep 25 10:35 UTC │
	│ delete  │ -p download-only-079557                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-079557   │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │ 29 Sep 25 10:35 UTC │
	│ start   │ -o=json --download-only -p download-only-239254 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-239254   │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ delete  │ -p download-only-239254                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-239254   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ delete  │ -p download-only-079557                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-079557   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ delete  │ -p download-only-239254                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-239254   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ start   │ --download-only -p download-docker-537454 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-537454 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ delete  │ -p download-docker-537454                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-537454 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ start   │ --download-only -p binary-mirror-606832 --alsologtostderr --binary-mirror http://127.0.0.1:41679 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-606832   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ delete  │ -p binary-mirror-606832                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-606832   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ addons  │ enable dashboard -p addons-529420                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ addons  │ disable dashboard -p addons-529420                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ start   │ -p addons-529420 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:38 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:36:12
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:36:12.344032  809094 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:36:12.344307  809094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:36:12.344317  809094 out.go:374] Setting ErrFile to fd 2...
	I0929 10:36:12.344322  809094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:36:12.344556  809094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 10:36:12.345145  809094 out.go:368] Setting JSON to false
	I0929 10:36:12.346064  809094 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8313,"bootTime":1759133859,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:36:12.346178  809094 start.go:140] virtualization: kvm guest
	I0929 10:36:12.347964  809094 out.go:179] * [addons-529420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:36:12.349153  809094 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:36:12.349158  809094 notify.go:220] Checking for updates...
	I0929 10:36:12.351412  809094 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:36:12.352567  809094 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:36:12.353660  809094 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:36:12.354776  809094 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:36:12.356142  809094 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:36:12.357739  809094 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:36:12.381211  809094 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:36:12.381322  809094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:36:12.437520  809094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:36:12.425260973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:36:12.437676  809094 docker.go:318] overlay module found
	I0929 10:36:12.439238  809094 out.go:179] * Using the docker driver based on user configuration
	I0929 10:36:12.440251  809094 start.go:304] selected driver: docker
	I0929 10:36:12.440267  809094 start.go:924] validating driver "docker" against <nil>
	I0929 10:36:12.440280  809094 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:36:12.440926  809094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:36:12.500513  809094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:36:12.490511662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:36:12.500726  809094 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:36:12.500930  809094 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:36:12.502519  809094 out.go:179] * Using Docker driver with root privileges
	I0929 10:36:12.503551  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:12.503623  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:12.503635  809094 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:36:12.503733  809094 start.go:348] cluster config:
	{Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:36:12.504920  809094 out.go:179] * Starting "addons-529420" primary control-plane node in "addons-529420" cluster
	I0929 10:36:12.505901  809094 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 10:36:12.506980  809094 out.go:179] * Pulling base image v0.0.48 ...
	I0929 10:36:12.508072  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:12.508109  809094 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:36:12.508127  809094 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 10:36:12.508138  809094 cache.go:58] Caching tarball of preloaded images
	I0929 10:36:12.508233  809094 preload.go:172] Found /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 10:36:12.508247  809094 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 10:36:12.508686  809094 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json ...
	I0929 10:36:12.508722  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json: {Name:mkf2bdb1c1bab8c96c543722d15e10cff077f479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:12.525703  809094 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:36:12.525838  809094 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:36:12.525859  809094 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:36:12.525863  809094 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:36:12.525870  809094 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:36:12.525876  809094 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 10:36:25.658111  809094 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 10:36:25.658161  809094 cache.go:232] Successfully downloaded all kic artifacts
	I0929 10:36:25.658213  809094 start.go:360] acquireMachinesLock for addons-529420: {Name:mkb1d52da565bdfba9b5d34f2b7f2b7e0a10b5f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:36:25.658335  809094 start.go:364] duration metric: took 99.814µs to acquireMachinesLock for "addons-529420"
	I0929 10:36:25.658362  809094 start.go:93] Provisioning new machine with config: &{Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 10:36:25.658452  809094 start.go:125] createHost starting for "" (driver="docker")
	I0929 10:36:25.660487  809094 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 10:36:25.660770  809094 start.go:159] libmachine.API.Create for "addons-529420" (driver="docker")
	I0929 10:36:25.660815  809094 client.go:168] LocalClient.Create starting
	I0929 10:36:25.660956  809094 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem
	I0929 10:36:25.875289  809094 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem
	I0929 10:36:26.406689  809094 cli_runner.go:164] Run: docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 10:36:26.424628  809094 cli_runner.go:211] docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 10:36:26.424709  809094 network_create.go:284] running [docker network inspect addons-529420] to gather additional debugging logs...
	I0929 10:36:26.424730  809094 cli_runner.go:164] Run: docker network inspect addons-529420
	W0929 10:36:26.443146  809094 cli_runner.go:211] docker network inspect addons-529420 returned with exit code 1
	I0929 10:36:26.443184  809094 network_create.go:287] error running [docker network inspect addons-529420]: docker network inspect addons-529420: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-529420 not found
	I0929 10:36:26.443196  809094 network_create.go:289] output of [docker network inspect addons-529420]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-529420 not found
	
	** /stderr **
	I0929 10:36:26.443292  809094 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:36:26.462904  809094 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bea4c0}
	I0929 10:36:26.462945  809094 network_create.go:124] attempt to create docker network addons-529420 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 10:36:26.462999  809094 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-529420 addons-529420
	I0929 10:36:26.525434  809094 network_create.go:108] docker network addons-529420 192.168.49.0/24 created
	I0929 10:36:26.525484  809094 kic.go:121] calculated static IP "192.168.49.2" for the "addons-529420" container
	I0929 10:36:26.525595  809094 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 10:36:26.543028  809094 cli_runner.go:164] Run: docker volume create addons-529420 --label name.minikube.sigs.k8s.io=addons-529420 --label created_by.minikube.sigs.k8s.io=true
	I0929 10:36:26.562375  809094 oci.go:103] Successfully created a docker volume addons-529420
	I0929 10:36:26.562495  809094 cli_runner.go:164] Run: docker run --rm --name addons-529420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --entrypoint /usr/bin/test -v addons-529420:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 10:36:28.727983  809094 cli_runner.go:217] Completed: docker run --rm --name addons-529420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --entrypoint /usr/bin/test -v addons-529420:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (2.165422344s)
	I0929 10:36:28.728035  809094 oci.go:107] Successfully prepared a docker volume addons-529420
	I0929 10:36:28.728077  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:28.728109  809094 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 10:36:28.728393  809094 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-529420:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 10:36:32.463008  809094 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-529420:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.734545448s)
	I0929 10:36:32.463043  809094 kic.go:203] duration metric: took 3.734930316s to extract preloaded images to volume ...
	W0929 10:36:32.463138  809094 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 10:36:32.463171  809094 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 10:36:32.463213  809094 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 10:36:32.517633  809094 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-529420 --name addons-529420 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-529420 --network addons-529420 --ip 192.168.49.2 --volume addons-529420:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 10:36:32.782605  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Running}}
	I0929 10:36:32.802682  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:32.821845  809094 cli_runner.go:164] Run: docker exec addons-529420 stat /var/lib/dpkg/alternatives/iptables
	I0929 10:36:32.871577  809094 oci.go:144] the created container "addons-529420" has a running status.
	I0929 10:36:32.871612  809094 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa...
	I0929 10:36:33.119788  809094 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 10:36:33.151014  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:33.170919  809094 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 10:36:33.170942  809094 kic_runner.go:114] Args: [docker exec --privileged addons-529420 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 10:36:33.215532  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:33.236236  809094 machine.go:93] provisionDockerMachine start ...
	I0929 10:36:33.236361  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.258174  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.258466  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.258486  809094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 10:36:33.399100  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-529420
	
	I0929 10:36:33.399136  809094 ubuntu.go:182] provisioning hostname "addons-529420"
	I0929 10:36:33.399203  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.418829  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.419101  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.419124  809094 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-529420 && echo "addons-529420" | sudo tee /etc/hostname
	I0929 10:36:33.570695  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-529420
	
	I0929 10:36:33.570813  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.590267  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.590566  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.590590  809094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-529420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-529420/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-529420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:36:33.728831  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:36:33.728862  809094 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21656-804126/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-804126/.minikube}
	I0929 10:36:33.728905  809094 ubuntu.go:190] setting up certificates
	I0929 10:36:33.728918  809094 provision.go:84] configureAuth start
	I0929 10:36:33.728972  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:33.748261  809094 provision.go:143] copyHostCerts
	I0929 10:36:33.748352  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/ca.pem (1082 bytes)
	I0929 10:36:33.748507  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/cert.pem (1123 bytes)
	I0929 10:36:33.748577  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/key.pem (1679 bytes)
	I0929 10:36:33.748631  809094 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem org=jenkins.addons-529420 san=[127.0.0.1 192.168.49.2 addons-529420 localhost minikube]
	I0929 10:36:33.869211  809094 provision.go:177] copyRemoteCerts
	I0929 10:36:33.869279  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:36:33.869324  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.888791  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:33.988019  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:36:34.017500  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:36:34.045429  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 10:36:34.073342  809094 provision.go:87] duration metric: took 344.406928ms to configureAuth
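
For reference, a server certificate with the same SANs as the one generated above (san=[127.0.0.1 192.168.49.2 addons-529420 localhost minikube]) could be approximated with openssl; minikube generates it in Go, so the key size and validity below are assumptions:

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.addons-529420" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-529420,DNS:localhost,DNS:minikube') \
      -out server.pem
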
	I0929 10:36:34.073378  809094 ubuntu.go:206] setting minikube options for container-runtime
	I0929 10:36:34.073606  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:34.073667  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.093622  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.093924  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.093946  809094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 10:36:34.233714  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 10:36:34.233744  809094 ubuntu.go:71] root file system type: overlay
	I0929 10:36:34.233863  809094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 10:36:34.233924  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.253864  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.254082  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.254160  809094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 10:36:34.407267  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
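
The empty ExecStart= directive in the unit above is the standard systemd idiom for replacing an inherited start command: the first assignment clears the list, the second sets the new command. The same pattern in a minimal stand-alone drop-in (flags illustrative, not the full unit minikube writes):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock\n' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart docker
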
	
	I0929 10:36:34.407373  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.426330  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.426664  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.426694  809094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 10:36:35.600389  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 10:36:34.405290761 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
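
The diff output above comes from a replace-only-if-changed idiom: the new unit is written to docker.service.new, and only when it differs from the current file is it moved into place and the service restarted. The same pattern in generic form (file and unit names illustrative):

    sudo diff -u /etc/example.conf /etc/example.conf.new >/dev/null \
      || { sudo mv /etc/example.conf.new /etc/example.conf; \
           sudo systemctl daemon-reload && sudo systemctl restart example; }
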
	
	I0929 10:36:35.600442  809094 machine.go:96] duration metric: took 2.364149235s to provisionDockerMachine
	I0929 10:36:35.600461  809094 client.go:171] duration metric: took 9.939636569s to LocalClient.Create
	I0929 10:36:35.600487  809094 start.go:167] duration metric: took 9.939718481s to libmachine.API.Create "addons-529420"
	I0929 10:36:35.600498  809094 start.go:293] postStartSetup for "addons-529420" (driver="docker")
	I0929 10:36:35.600507  809094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:36:35.600574  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:36:35.600613  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.620381  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.722159  809094 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:36:35.726032  809094 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 10:36:35.726071  809094 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 10:36:35.726079  809094 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 10:36:35.726086  809094 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 10:36:35.726099  809094 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-804126/.minikube/addons for local assets ...
	I0929 10:36:35.726162  809094 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-804126/.minikube/files for local assets ...
	I0929 10:36:35.726189  809094 start.go:296] duration metric: took 125.685891ms for postStartSetup
	I0929 10:36:35.726542  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:35.746842  809094 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json ...
	I0929 10:36:35.747139  809094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:36:35.747190  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.766493  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.861001  809094 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 10:36:35.865987  809094 start.go:128] duration metric: took 10.207507252s to createHost
	I0929 10:36:35.866014  809094 start.go:83] releasing machines lock for "addons-529420", held for 10.207665879s
	I0929 10:36:35.866077  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:35.884982  809094 ssh_runner.go:195] Run: cat /version.json
	I0929 10:36:35.885044  809094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:36:35.885050  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.885102  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.904087  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.905861  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:36.079122  809094 ssh_runner.go:195] Run: systemctl --version
	I0929 10:36:36.084194  809094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 10:36:36.089268  809094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 10:36:36.123533  809094 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 10:36:36.123613  809094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:36:36.154267  809094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
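
After the loopback patch above, the CNI config gains an explicit name and a pinned cniVersion; the patched file should look roughly like this (exact file name under /etc/cni/net.d varies):

    cat /etc/cni/net.d/*loopback.conf   # expected, roughly:
    # { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }
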
	I0929 10:36:36.154304  809094 start.go:495] detecting cgroup driver to use...
	I0929 10:36:36.154338  809094 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:36:36.154522  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:36:36.173730  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 10:36:36.186991  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 10:36:36.198504  809094 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 10:36:36.198583  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 10:36:36.210007  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 10:36:36.221306  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 10:36:36.232714  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 10:36:36.244397  809094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:36:36.255307  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 10:36:36.266876  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 10:36:36.278201  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 10:36:36.289714  809094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:36:36.299414  809094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 10:36:36.309736  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:36.379690  809094 ssh_runner.go:195] Run: sudo systemctl restart containerd
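
A quick way to confirm the cgroup-driver edit above took effect before containerd came back up:

    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = true
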
	I0929 10:36:36.458212  809094 start.go:495] detecting cgroup driver to use...
	I0929 10:36:36.458261  809094 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:36:36.458313  809094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 10:36:36.471829  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:36:36.484463  809094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:36:36.501567  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:36:36.514414  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 10:36:36.527777  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:36:36.546772  809094 ssh_runner.go:195] Run: which cri-dockerd
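
The /etc/crictl.yaml written just above switches crictl from the containerd socket to cri-dockerd. A quick check once the socket exists (crictl reads /etc/crictl.yaml by default):

    cat /etc/crictl.yaml    # runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo crictl version
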
	I0929 10:36:36.550960  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 10:36:36.562724  809094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 10:36:36.582736  809094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 10:36:36.653862  809094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 10:36:36.724043  809094 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 10:36:36.724204  809094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 10:36:36.744794  809094 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 10:36:36.757465  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:36.826884  809094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 10:36:37.610007  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:36:37.622898  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 10:36:37.636301  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 10:36:37.649738  809094 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 10:36:37.722666  809094 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 10:36:37.796505  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:37.867566  809094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 10:36:37.892266  809094 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 10:36:37.904989  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:37.972029  809094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 10:36:38.051455  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 10:36:38.065182  809094 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 10:36:38.065259  809094 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 10:36:38.069600  809094 start.go:563] Will wait 60s for crictl version
	I0929 10:36:38.069675  809094 ssh_runner.go:195] Run: which crictl
	I0929 10:36:38.073569  809094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:36:38.110640  809094 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 10:36:38.110713  809094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 10:36:38.140525  809094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 10:36:38.168486  809094 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 10:36:38.168581  809094 cli_runner.go:164] Run: docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:36:38.186777  809094 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 10:36:38.191260  809094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:36:38.203969  809094 kubeadm.go:875] updating cluster {Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:36:38.204104  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:38.204155  809094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 10:36:38.227008  809094 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 10:36:38.227036  809094 docker.go:621] Images already preloaded, skipping extraction
	I0929 10:36:38.227106  809094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 10:36:38.249601  809094 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 10:36:38.249632  809094 cache_images.go:85] Images are preloaded, skipping loading
	I0929 10:36:38.249646  809094 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0929 10:36:38.249774  809094 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-529420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 10:36:38.249838  809094 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 10:36:38.304936  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:38.304974  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:38.305005  809094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:36:38.305052  809094 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-529420 NodeName:addons-529420 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:36:38.305228  809094 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-529420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 10:36:38.305304  809094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:36:38.315730  809094 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:36:38.315813  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:36:38.326318  809094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 10:36:38.346540  809094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:36:38.366500  809094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
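
With kubeadm.yaml.new on the node, the generated config can be sanity-checked before init; one way, assuming a kubeadm recent enough to ship `kubeadm config validate` (v1.26+):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
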
	I0929 10:36:38.386461  809094 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 10:36:38.390498  809094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:36:38.403052  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:38.473834  809094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:36:38.499163  809094 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420 for IP: 192.168.49.2
	I0929 10:36:38.499185  809094 certs.go:194] generating shared ca certs ...
	I0929 10:36:38.499202  809094 certs.go:226] acquiring lock for ca certs: {Name:mkc1a6b9be22ca2c6acbf4ae075c8649cd7bb029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.499334  809094 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key
	I0929 10:36:38.646613  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt ...
	I0929 10:36:38.646650  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt: {Name:mk6af424fad818aae2ecb63097cb3cf4fa072fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.646838  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key ...
	I0929 10:36:38.646852  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key: {Name:mk82f55830669e727af6d6e5b82afa28223d781a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.646926  809094 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key
	I0929 10:36:38.721965  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt ...
	I0929 10:36:38.721996  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt: {Name:mkfb52e92ed776fc971f1dc2b44df4c3ddc05e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.722184  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key ...
	I0929 10:36:38.722197  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key: {Name:mk1e237f25fee0cc4ff615a96b95c3d6470c2cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.722265  809094 certs.go:256] generating profile certs ...
	I0929 10:36:38.722327  809094 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key
	I0929 10:36:38.722341  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt with IP's: []
	I0929 10:36:39.380758  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt ...
	I0929 10:36:39.380790  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: {Name:mkc77ef14ddf2986f76245d42c12726cef377eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.380976  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key ...
	I0929 10:36:39.380988  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key: {Name:mke6b630fea371c2903c210193a2d24b7bb1f06f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.381077  809094 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27
	I0929 10:36:39.381099  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 10:36:39.520731  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 ...
	I0929 10:36:39.520767  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27: {Name:mkcca52b9df73c7117f5aaacf4f623dd6c9d43c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.520941  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27 ...
	I0929 10:36:39.520956  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27: {Name:mk26b5e71bc5d928280847a07112275ef7de7abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.521026  809094 certs.go:381] copying /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 -> /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt
	I0929 10:36:39.521153  809094 certs.go:385] copying /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27 -> /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key
	I0929 10:36:39.521216  809094 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key
	I0929 10:36:39.521237  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt with IP's: []
	I0929 10:36:39.588037  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt ...
	I0929 10:36:39.588071  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt: {Name:mkf924850f5da9a86daf75692dc66662efd3fc9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.588249  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key ...
	I0929 10:36:39.588264  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key: {Name:mkecb516723c573c2dd57316eed6cd1d6263ba08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.588475  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:36:39.588513  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:36:39.588537  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:36:39.588557  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem (1679 bytes)
	I0929 10:36:39.589229  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:36:39.617084  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 10:36:39.644094  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:36:39.671391  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 10:36:39.698770  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:36:39.726395  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 10:36:39.753719  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:36:39.781288  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 10:36:39.808573  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:36:39.838955  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:36:39.859027  809094 ssh_runner.go:195] Run: openssl version
	I0929 10:36:39.865345  809094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:36:39.879241  809094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.883476  809094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.883550  809094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.891125  809094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
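
The symlink name b5213941.0 is not arbitrary: OpenSSL locates CA certificates by subject hash, which is exactly what the `openssl x509 -hash` call two steps earlier prints:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
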
	I0929 10:36:39.902246  809094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:36:39.906800  809094 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:36:39.906853  809094 kubeadm.go:392] StartCluster: {Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:36:39.907037  809094 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 10:36:39.927225  809094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:36:39.937579  809094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:36:39.948182  809094 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 10:36:39.948260  809094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:36:39.958474  809094 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:36:39.958499  809094 kubeadm.go:157] found existing configuration files:
	
	I0929 10:36:39.958547  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:36:39.968562  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:36:39.968615  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:36:39.978358  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:36:39.988499  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:36:39.988557  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:36:39.998306  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:36:40.008316  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:36:40.008394  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:36:40.018692  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:36:40.028624  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:36:40.028694  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:36:40.038485  809094 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 10:36:40.113436  809094 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 10:36:40.172082  809094 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:36:49.487259  809094 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:36:49.487349  809094 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:36:49.487474  809094 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 10:36:49.487549  809094 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 10:36:49.487599  809094 kubeadm.go:310] OS: Linux
	I0929 10:36:49.487674  809094 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 10:36:49.487738  809094 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 10:36:49.487808  809094 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 10:36:49.487877  809094 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 10:36:49.487945  809094 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 10:36:49.488009  809094 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 10:36:49.488084  809094 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 10:36:49.488155  809094 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 10:36:49.488236  809094 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:36:49.488334  809094 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:36:49.488489  809094 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:36:49.488588  809094 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:36:49.490162  809094 out.go:252]   - Generating certificates and keys ...
	I0929 10:36:49.490252  809094 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:36:49.490322  809094 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:36:49.490390  809094 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:36:49.490463  809094 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:36:49.490546  809094 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:36:49.490627  809094 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:36:49.490693  809094 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:36:49.490785  809094 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-529420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:36:49.490833  809094 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:36:49.490923  809094 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-529420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:36:49.490993  809094 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:36:49.491078  809094 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:36:49.491125  809094 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:36:49.491174  809094 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:36:49.491224  809094 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:36:49.491279  809094 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:36:49.491356  809094 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:36:49.491494  809094 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:36:49.491545  809094 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:36:49.491621  809094 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:36:49.491692  809094 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:36:49.493671  809094 out.go:252]   - Booting up control plane ...
	I0929 10:36:49.493764  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:36:49.493839  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:36:49.493913  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:36:49.494005  809094 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:36:49.494093  809094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:36:49.494209  809094 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:36:49.494311  809094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:36:49.494360  809094 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:36:49.494507  809094 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:36:49.494615  809094 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:36:49.494666  809094 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001039286s
	I0929 10:36:49.494753  809094 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:36:49.494843  809094 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 10:36:49.494970  809094 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:36:49.495066  809094 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:36:49.495136  809094 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.704142157s
	I0929 10:36:49.495206  809094 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.204406716s
	I0929 10:36:49.495290  809094 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.502382535s
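The three control-plane-check probes above hit ordinary HTTPS health endpoints, so the same checks can be repeated by hand from inside the node if a later run stalls at this phase (a quick sketch; the URLs are the ones kubeadm logs above):

  curl -sk https://192.168.49.2:8443/livez     # kube-apiserver
  curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
  curl -sk https://127.0.0.1:10259/livez       # kube-scheduler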
	I0929 10:36:49.495410  809094 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:36:49.495564  809094 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:36:49.495618  809094 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:36:49.495799  809094 kubeadm.go:310] [mark-control-plane] Marking the node addons-529420 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:36:49.495870  809094 kubeadm.go:310] [bootstrap-token] Using token: yuv8vz.ch07v7kt1fq2e5wb
	I0929 10:36:49.498139  809094 out.go:252]   - Configuring RBAC rules ...
	I0929 10:36:49.498247  809094 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:36:49.498343  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:36:49.498483  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:36:49.498597  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:36:49.498726  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:36:49.498844  809094 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:36:49.498986  809094 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:36:49.499027  809094 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:36:49.499068  809094 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:36:49.499073  809094 kubeadm.go:310] 
	I0929 10:36:49.499124  809094 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:36:49.499129  809094 kubeadm.go:310] 
	I0929 10:36:49.499199  809094 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:36:49.499205  809094 kubeadm.go:310] 
	I0929 10:36:49.499230  809094 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:36:49.499295  809094 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:36:49.499340  809094 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:36:49.499345  809094 kubeadm.go:310] 
	I0929 10:36:49.499394  809094 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:36:49.499400  809094 kubeadm.go:310] 
	I0929 10:36:49.499496  809094 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:36:49.499503  809094 kubeadm.go:310] 
	I0929 10:36:49.499547  809094 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:36:49.499632  809094 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:36:49.499694  809094 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:36:49.499700  809094 kubeadm.go:310] 
	I0929 10:36:49.499768  809094 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:36:49.499834  809094 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:36:49.499840  809094 kubeadm.go:310] 
	I0929 10:36:49.499917  809094 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yuv8vz.ch07v7kt1fq2e5wb \
	I0929 10:36:49.500007  809094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52ed2ed4898a1f91b8e8d0663d1b9110ba480fd365338f5c4892399f9ccfb123 \
	I0929 10:36:49.500027  809094 kubeadm.go:310] 	--control-plane 
	I0929 10:36:49.500033  809094 kubeadm.go:310] 
	I0929 10:36:49.500105  809094 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:36:49.500110  809094 kubeadm.go:310] 
	I0929 10:36:49.500183  809094 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yuv8vz.ch07v7kt1fq2e5wb \
	I0929 10:36:49.500291  809094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52ed2ed4898a1f91b8e8d0663d1b9110ba480fd365338f5c4892399f9ccfb123 
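The --discovery-token-ca-cert-hash printed above can be recomputed on the control-plane node with the standard kubeadm recipe, which is useful when verifying a join command against a cluster whose init output is no longer at hand (paths are the kubeadm defaults):

  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'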
	I0929 10:36:49.500305  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:49.500322  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:49.501607  809094 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 10:36:49.502541  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 10:36:49.513883  809094 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
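The 496-byte conflist itself is not reproduced in the log. For orientation, a minimal bridge conflist of the kind minikube writes looks roughly like the following; this is an illustrative sketch, not the actual file, and the 10.244.0.0/16 pod subnet is an assumption (it matches the pod IPs seen elsewhere in this run):

  cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "addIf": "true",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF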
	I0929 10:36:49.534107  809094 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:36:49.534178  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:49.534217  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-529420 minikube.k8s.io/updated_at=2025_09_29T10_36_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8 minikube.k8s.io/name=addons-529420 minikube.k8s.io/primary=true
	I0929 10:36:49.634858  809094 ops.go:34] apiserver oom_adj: -16
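The oom_adj value read above confirms the kubelet shielded the apiserver from the OOM killer: -16 on the legacy scale corresponds to an oom_score_adj near -997, which is what the kubelet assigns to system-critical static pods. Both knobs can be inspected directly on the node:

  cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy interface, range -17..15 (what the log reads)
  cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # current interface, range -1000..1000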
	I0929 10:36:49.634879  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:50.135279  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:50.635199  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:51.135670  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:51.635820  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:52.135653  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:52.635900  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.135655  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.635560  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.702327  809094 kubeadm.go:1105] duration metric: took 4.168204298s to wait for elevateKubeSystemPrivileges
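The repeated "kubectl get sa default" polls above implement the elevateKubeSystemPrivileges wait: the default ServiceAccount is created asynchronously by the controller manager, so its appearance is a cheap signal that service accounts (including the kube-system:default account bound by minikube-rbac) are usable. A minimal standalone equivalent of that wait:

  # block until the namespace's default ServiceAccount has been created
  until kubectl get serviceaccount default >/dev/null 2>&1; do
    sleep 0.5
  done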
	I0929 10:36:53.702374  809094 kubeadm.go:394] duration metric: took 13.795525537s to StartCluster
	I0929 10:36:53.702401  809094 settings.go:142] acquiring lock: {Name:mkba08796c8ac17f1f94d6f1a6799aa6725155e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:53.702549  809094 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:36:53.703073  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/kubeconfig: {Name:mkae1c4acd5fb2ec6cdb06d7e9593ac36dd4ccbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:53.703279  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:36:53.703300  809094 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 10:36:53.703359  809094 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
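The toEnable map above is the programmatic form of the "minikube addons" surface; the same per-profile toggles are available from the CLI, e.g.:

  minikube -p addons-529420 addons enable volcano
  minikube -p addons-529420 addons list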
	I0929 10:36:53.703546  809094 addons.go:69] Setting ingress=true in profile "addons-529420"
	I0929 10:36:53.703557  809094 addons.go:69] Setting metrics-server=true in profile "addons-529420"
	I0929 10:36:53.703575  809094 addons.go:69] Setting volcano=true in profile "addons-529420"
	I0929 10:36:53.703577  809094 addons.go:238] Setting addon ingress=true in "addons-529420"
	I0929 10:36:53.703583  809094 addons.go:238] Setting addon metrics-server=true in "addons-529420"
	I0929 10:36:53.703600  809094 addons.go:238] Setting addon volcano=true in "addons-529420"
	I0929 10:36:53.703611  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:53.703623  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703615  809094 addons.go:69] Setting storage-provisioner=true in profile "addons-529420"
	I0929 10:36:53.703640  809094 addons.go:69] Setting inspektor-gadget=true in profile "addons-529420"
	I0929 10:36:53.703649  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703650  809094 addons.go:238] Setting addon inspektor-gadget=true in "addons-529420"
	I0929 10:36:53.703670  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703670  809094 addons.go:69] Setting default-storageclass=true in profile "addons-529420"
	I0929 10:36:53.703673  809094 addons.go:238] Setting addon storage-provisioner=true in "addons-529420"
	I0929 10:36:53.703685  809094 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-529420"
	I0929 10:36:53.703665  809094 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-529420"
	I0929 10:36:53.703742  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703752  809094 addons.go:69] Setting registry=true in profile "addons-529420"
	I0929 10:36:53.703784  809094 addons.go:238] Setting addon registry=true in "addons-529420"
	I0929 10:36:53.703797  809094 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-529420"
	I0929 10:36:53.703816  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703839  809094 addons.go:69] Setting registry-creds=true in profile "addons-529420"
	I0929 10:36:53.703853  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703864  809094 addons.go:238] Setting addon registry-creds=true in "addons-529420"
	I0929 10:36:53.703888  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.704091  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704229  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704236  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704241  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704269  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704280  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704369  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704383  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704822  809094 addons.go:69] Setting gcp-auth=true in profile "addons-529420"
	I0929 10:36:53.704853  809094 mustload.go:65] Loading cluster: addons-529420
	I0929 10:36:53.704946  809094 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-529420"
	I0929 10:36:53.704967  809094 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-529420"
	I0929 10:36:53.705010  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.705063  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:53.705314  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.705514  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.705877  809094 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-529420"
	I0929 10:36:53.705901  809094 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-529420"
	I0929 10:36:53.705927  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.706142  809094 out.go:179] * Verifying Kubernetes components...
	I0929 10:36:53.706320  809094 addons.go:69] Setting volumesnapshots=true in profile "addons-529420"
	I0929 10:36:53.706369  809094 addons.go:238] Setting addon volumesnapshots=true in "addons-529420"
	I0929 10:36:53.706411  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703632  809094 addons.go:69] Setting ingress-dns=true in profile "addons-529420"
	I0929 10:36:53.706720  809094 addons.go:238] Setting addon ingress-dns=true in "addons-529420"
	I0929 10:36:53.706754  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.707040  809094 addons.go:69] Setting cloud-spanner=true in profile "addons-529420"
	I0929 10:36:53.707069  809094 addons.go:238] Setting addon cloud-spanner=true in "addons-529420"
	I0929 10:36:53.707096  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703568  809094 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-529420"
	I0929 10:36:53.707458  809094 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-529420"
	I0929 10:36:53.707552  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:53.703557  809094 addons.go:69] Setting yakd=true in profile "addons-529420"
	I0929 10:36:53.707960  809094 addons.go:238] Setting addon yakd=true in "addons-529420"
	I0929 10:36:53.707999  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703623  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.714143  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.714652  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719013  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719122  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719970  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.722035  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.723786  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.778757  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.780491  809094 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:36:53.781884  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:36:53.781909  809094 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:36:53.782011  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.783509  809094 addons.go:238] Setting addon default-storageclass=true in "addons-529420"
	I0929 10:36:53.783669  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.786763  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.807336  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:36:53.807376  809094 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:36:53.807346  809094 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:36:53.807406  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:36:53.808907  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:36:53.809965  809094 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-529420"
	I0929 10:36:53.810002  809094 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:36:53.810022  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:36:53.810026  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.810093  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.809085  809094 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:36:53.809123  809094 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:36:53.810358  809094 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:36:53.810445  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.810589  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.811657  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:36:53.811713  809094 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:36:53.811728  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:36:53.811729  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:36:53.811785  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.812811  809094 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:36:53.812830  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:36:53.812883  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.813727  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:36:53.814898  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:36:53.816283  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:36:53.817528  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 10:36:53.818644  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:36:53.820451  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:36:53.820547  809094 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:36:53.820676  809094 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:36:53.821594  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:36:53.821612  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:36:53.821686  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.822262  809094 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:36:53.823314  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:36:53.822343  809094 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:36:53.823758  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:36:53.823888  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.824274  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.826866  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:36:53.828202  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:36:53.828229  809094 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:36:53.828293  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.831468  809094 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:36:53.831558  809094 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:36:53.831581  809094 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I0929 10:36:53.831884  809094 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:36:53.835051  809094 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:36:53.835084  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:36:53.835153  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.836300  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:36:53.836321  809094 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:36:53.836386  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.836883  809094 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I0929 10:36:53.837031  809094 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:36:53.838461  809094 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:36:53.838481  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:36:53.838563  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.839840  809094 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I0929 10:36:53.846729  809094 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 10:36:53.847497  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I0929 10:36:53.847543  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.847569  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.862719  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.868937  809094 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:36:53.870360  809094 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:36:53.870448  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:36:53.870559  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.876206  809094 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:36:53.876242  809094 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:36:53.879516  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.881669  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
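That sed pipeline rewrites the coredns ConfigMap in place, inserting a hosts block ahead of the forward directive (and a log directive before errors) so that host.minikube.internal resolves to the host gateway. The patched Corefile can be checked afterwards and should contain a stanza like this:

  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
  # expected to include, ahead of the forward directive:
  #     hosts {
  #        192.168.49.1 host.minikube.internal
  #        fallthrough
  #     }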
	I0929 10:36:53.897463  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.897986  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.902073  809094 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:36:53.907027  809094 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:36:53.908323  809094 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:36:53.908508  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:36:53.908601  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.912847  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.916620  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.916689  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.918495  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.924984  809094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:36:53.927315  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.928318  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.929791  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.931522  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.931855  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.932316  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	W0929 10:36:53.933912  809094 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 10:36:53.933975  809094 retry.go:31] will retry after 222.37842ms: ssh: handshake failed: EOF
	I0929 10:36:53.947593  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.951956  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:54.017495  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:36:54.017528  809094 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:36:54.037524  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:36:54.047797  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:36:54.047824  809094 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:36:54.088850  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:36:54.102982  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:36:54.104728  809094 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:36:54.104759  809094 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:36:54.105824  809094 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:54.105847  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:36:54.113997  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:36:54.115206  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:36:54.115595  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:36:54.115658  809094 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:36:54.115954  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:36:54.131289  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:36:54.131324  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:36:54.141402  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:36:54.141685  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:36:54.141711  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:36:54.146347  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:36:54.146375  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:36:54.148830  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 10:36:54.158233  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:36:54.187390  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:54.199350  809094 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:36:54.199378  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:36:54.219151  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:36:54.219235  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:36:54.228377  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:36:54.228502  809094 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:36:54.236642  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:36:54.236676  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:36:54.249985  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:36:54.250018  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:36:54.257015  809094 node_ready.go:35] waiting up to 6m0s for node "addons-529420" to be "Ready" ...
	I0929 10:36:54.257465  809094 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 10:36:54.260181  809094 node_ready.go:49] node "addons-529420" is "Ready"
	I0929 10:36:54.260213  809094 node_ready.go:38] duration metric: took 3.156144ms for node "addons-529420" to be "Ready" ...
	I0929 10:36:54.260251  809094 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:36:54.260304  809094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:36:54.275549  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:36:54.288410  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:36:54.288527  809094 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:36:54.309547  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:36:54.326392  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:36:54.326462  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:36:54.371852  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:36:54.371884  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:36:54.385903  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:36:54.406122  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:36:54.441278  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:36:54.441321  809094 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:36:54.475876  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:36:54.475975  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:36:54.558603  809094 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:54.558634  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:36:54.567254  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:36:54.567363  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:36:54.637129  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:36:54.637160  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:36:54.681921  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:54.728318  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:36:54.728360  809094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:36:54.766660  809094 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-529420" context rescaled to 1 replicas
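Rescaling coredns to a single replica is the expected single-node adjustment; the manual equivalent of what kapi.go does here would be:

  kubectl -n kube-system scale deployment coredns --replicas=1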
	I0929 10:36:54.848450  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:36:54.848663  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:36:54.946361  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:36:54.946523  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:36:55.079437  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:36:55.079471  809094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:36:55.190890  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:36:55.493984  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.390947237s)
	I0929 10:36:55.494004  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.379973258s)
	I0929 10:36:55.494048  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.378812825s)
	I0929 10:36:56.183759  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.042240387s)
	I0929 10:36:56.184086  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.068081762s)
	I0929 10:36:56.184109  809094 addons.go:479] Verifying addon ingress=true in "addons-529420"
	I0929 10:36:56.185647  809094 out.go:179] * Verifying ingress addon...
	I0929 10:36:56.187598  809094 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:36:56.207003  809094 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:36:56.207034  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:56.705051  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.193166  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.222996  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.064717663s)
	I0929 10:36:57.223366  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.035944416s)
	W0929 10:36:57.223400  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:36:57.223437  809094 retry.go:31] will retry after 152.765294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
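The validation error is informative: "apiVersion not set, kind not set" means the applied ig-crd.yaml does not contain a complete manifest, which is consistent with the transfer earlier in this log that copied only 14 bytes into /etc/kubernetes/addons/ig-crd.yaml. When debugging by hand, the file can be inspected and validated client-side without mutating the cluster (a diagnostic sketch):

  wc -c /etc/kubernetes/addons/ig-crd.yaml
  head /etc/kubernetes/addons/ig-crd.yaml
  kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml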
	I0929 10:36:57.223474  809094 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.963156736s)
	I0929 10:36:57.223491  809094 api_server.go:72] duration metric: took 3.520164967s to wait for apiserver process to appear ...
	I0929 10:36:57.223498  809094 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:36:57.223518  809094 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 10:36:57.223916  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.948332459s)
	I0929 10:36:57.223942  809094 addons.go:479] Verifying addon registry=true in "addons-529420"
	I0929 10:36:57.224224  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.914632573s)
	I0929 10:36:57.224522  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.838576934s)
	I0929 10:36:57.224572  809094 addons.go:479] Verifying addon metrics-server=true in "addons-529420"
	I0929 10:36:57.224580  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.818313312s)
	I0929 10:36:57.224688  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542731559s)
	W0929 10:36:57.224716  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:36:57.224732  809094 retry.go:31] will retry after 151.241621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
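This second retry is the classic CRD-ordering race rather than a corrupt manifest: the VolumeSnapshotClass is applied in the same batch that creates its CRD, and the API server has not yet established the new type when the class is submitted, hence "ensure CRDs are installed first"; the retry succeeds once discovery catches up. A race-free sequencing waits for the CRD to be Established before creating instances of it:

  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
  kubectl wait --for=condition=Established \
    crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml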
	I0929 10:36:57.224940  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.034013059s)
	I0929 10:36:57.224964  809094 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-529420"
	I0929 10:36:57.225804  809094 out.go:179] * Verifying registry addon...
	I0929 10:36:57.225829  809094 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-529420 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:36:57.227287  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.078403021s)
	I0929 10:36:57.228280  809094 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:36:57.229531  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:36:57.230385  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 10:36:57.236158  809094 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 10:36:57.238077  809094 api_server.go:141] control plane version: v1.34.0
	I0929 10:36:57.238109  809094 api_server.go:131] duration metric: took 14.602902ms to wait for apiserver health ...
	I0929 10:36:57.238122  809094 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:36:57.295055  809094 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:36:57.295086  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:57.297613  809094 system_pods.go:59] 19 kube-system pods found
	I0929 10:36:57.297672  809094 system_pods.go:61] "amd-gpu-device-plugin-5pr2g" [d246c61e-5b06-4a8a-b834-a8104ed29092] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:36:57.297684  809094 system_pods.go:61] "coredns-66bc5c9577-5qvcx" [431a8d9d-251c-4916-9c89-87d909588d39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:36:57.297700  809094 system_pods.go:61] "csi-hostpath-attacher-0" [70a20e1e-6207-4b51-afa5-e013bc35656f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:36:57.297708  809094 system_pods.go:61] "csi-hostpath-resizer-0" [9de479e4-bad8-4f15-bfd7-f7f8a2dd12e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:36:57.297716  809094 system_pods.go:61] "csi-hostpathplugin-z7562" [bb4e1eda-850e-4da2-ae44-f84511d3ee67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:36:57.297725  809094 system_pods.go:61] "etcd-addons-529420" [1b81bfdd-919b-4cd8-a60e-22f79112660f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:36:57.297730  809094 system_pods.go:61] "kube-apiserver-addons-529420" [47505dbb-1863-4367-98f4-9f7ebb2c9415] Running
	I0929 10:36:57.297746  809094 system_pods.go:61] "kube-controller-manager-addons-529420" [37496a5f-d392-4a23-b1a5-1c7a43d653a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:36:57.297755  809094 system_pods.go:61] "kube-ingress-dns-minikube" [a3dd66da-3ebf-43f5-80b9-e99fc50dfcb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:36:57.297760  809094 system_pods.go:61] "kube-proxy-kxjqx" [99197e14-7d79-403b-87f4-b1f17f52f9f5] Running
	I0929 10:36:57.297767  809094 system_pods.go:61] "kube-scheduler-addons-529420" [ebe74bc8-ca56-4474-99cf-5b6633e8a102] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:36:57.297774  809094 system_pods.go:61] "metrics-server-85b7d694d7-6kvbb" [2261b64a-3e18-4fca-8fb4-4a870b5d302f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:36:57.297781  809094 system_pods.go:61] "nvidia-device-plugin-daemonset-rv5nz" [70340435-f458-4a2a-a617-185f04893887] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:36:57.297791  809094 system_pods.go:61] "registry-66898fdd98-9545s" [6b827e39-0a4c-4539-b408-9bfd1cdda6c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:36:57.297799  809094 system_pods.go:61] "registry-creds-764b6fb674-ssj2n" [a5aecf48-b091-49c9-8743-a9d358c2fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:36:57.297807  809094 system_pods.go:61] "registry-proxy-b2bpz" [87ba9cf0-8f09-4423-b674-f77637763bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:36:57.297814  809094 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bxfh2" [bb19aa00-633c-4f38-88f2-3cd29546aec9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.297826  809094 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pwx9p" [5203fa8d-6a4b-4bc9-89be-9a58d780e11e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.297834  809094 system_pods.go:61] "storage-provisioner" [c791ec78-6e64-4b97-a928-e3f1e19e92c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:36:57.297843  809094 system_pods.go:74] duration metric: took 59.712387ms to wait for pod list to return data ...
	I0929 10:36:57.297854  809094 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:36:57.298160  809094 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:36:57.298172  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
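The kapi.go:75/96 entries above poll pods matched by a label selector roughly every 500ms until each one leaves Pending and reports Ready. A sketch of that wait loop using client-go; waitForLabel and the kubeconfig wiring are assumptions, only the client-go calls themselves are real API.

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until every match
// reports the Ready condition (or ctx expires).
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for i := range pods.Items {
			if !podReady(&pods.Items[i]) {
				ready = false
				break
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // the log polls at ~500ms intervals
		}
	}
}

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
}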
	I0929 10:36:57.305651  809094 default_sa.go:45] found service account: "default"
	I0929 10:36:57.305752  809094 default_sa.go:55] duration metric: took 7.889394ms for default service account to be created ...
	I0929 10:36:57.305777  809094 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:36:57.314079  809094 system_pods.go:86] 19 kube-system pods found
	I0929 10:36:57.314192  809094 system_pods.go:89] "amd-gpu-device-plugin-5pr2g" [d246c61e-5b06-4a8a-b834-a8104ed29092] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:36:57.314243  809094 system_pods.go:89] "coredns-66bc5c9577-5qvcx" [431a8d9d-251c-4916-9c89-87d909588d39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:36:57.314277  809094 system_pods.go:89] "csi-hostpath-attacher-0" [70a20e1e-6207-4b51-afa5-e013bc35656f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:36:57.314298  809094 system_pods.go:89] "csi-hostpath-resizer-0" [9de479e4-bad8-4f15-bfd7-f7f8a2dd12e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:36:57.314318  809094 system_pods.go:89] "csi-hostpathplugin-z7562" [bb4e1eda-850e-4da2-ae44-f84511d3ee67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:36:57.314350  809094 system_pods.go:89] "etcd-addons-529420" [1b81bfdd-919b-4cd8-a60e-22f79112660f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:36:57.314386  809094 system_pods.go:89] "kube-apiserver-addons-529420" [47505dbb-1863-4367-98f4-9f7ebb2c9415] Running
	I0929 10:36:57.314408  809094 system_pods.go:89] "kube-controller-manager-addons-529420" [37496a5f-d392-4a23-b1a5-1c7a43d653a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:36:57.314453  809094 system_pods.go:89] "kube-ingress-dns-minikube" [a3dd66da-3ebf-43f5-80b9-e99fc50dfcb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:36:57.314472  809094 system_pods.go:89] "kube-proxy-kxjqx" [99197e14-7d79-403b-87f4-b1f17f52f9f5] Running
	I0929 10:36:57.314489  809094 system_pods.go:89] "kube-scheduler-addons-529420" [ebe74bc8-ca56-4474-99cf-5b6633e8a102] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:36:57.314521  809094 system_pods.go:89] "metrics-server-85b7d694d7-6kvbb" [2261b64a-3e18-4fca-8fb4-4a870b5d302f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:36:57.314541  809094 system_pods.go:89] "nvidia-device-plugin-daemonset-rv5nz" [70340435-f458-4a2a-a617-185f04893887] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:36:57.314558  809094 system_pods.go:89] "registry-66898fdd98-9545s" [6b827e39-0a4c-4539-b408-9bfd1cdda6c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:36:57.314575  809094 system_pods.go:89] "registry-creds-764b6fb674-ssj2n" [a5aecf48-b091-49c9-8743-a9d358c2fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:36:57.314623  809094 system_pods.go:89] "registry-proxy-b2bpz" [87ba9cf0-8f09-4423-b674-f77637763bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:36:57.314641  809094 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxfh2" [bb19aa00-633c-4f38-88f2-3cd29546aec9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.314661  809094 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pwx9p" [5203fa8d-6a4b-4bc9-89be-9a58d780e11e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.314703  809094 system_pods.go:89] "storage-provisioner" [c791ec78-6e64-4b97-a928-e3f1e19e92c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:36:57.314741  809094 system_pods.go:126] duration metric: took 8.926144ms to wait for k8s-apps to be running ...
	I0929 10:36:57.314762  809094 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:36:57.314849  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:36:57.336098  809094 system_svc.go:56] duration metric: took 21.32375ms WaitForService to wait for kubelet
	I0929 10:36:57.336133  809094 kubeadm.go:578] duration metric: took 3.632804576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
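The kubelet check above shells out to systemctl is-active --quiet, which prints nothing and signals the answer purely through its exit status (0 = active). A minimal sketch of the same probe; kubeletActive is an illustrative name, the command arguments mirror the logged invocation.

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive mirrors the logged command: --quiet makes systemctl print
// nothing, so the exit status alone carries the answer (nil error = active).
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}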
	I0929 10:36:57.336158  809094 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:36:57.344338  809094 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 10:36:57.344379  809094 node_conditions.go:123] node cpu capacity is 8
	I0929 10:36:57.344396  809094 node_conditions.go:105] duration metric: took 8.231792ms to run NodePressure ...
	I0929 10:36:57.344410  809094 start.go:241] waiting for startup goroutines ...
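The NodePressure step above reads each node's reported capacity (304681132Ki ephemeral storage and 8 CPUs in this run) and verifies no pressure condition is True. A hedged client-go sketch of that verification; verifyNodePressure is an assumed name and the kubeconfig wiring matches the earlier sketch.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// verifyNodePressure prints each node's capacity and fails if any
// pressure condition (memory, disk, PID) is reported True.
func verifyNodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s: %s is True", n.Name, c.Type)
				}
			}
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	if err := verifyNodePressure(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
		panic(err)
	}
}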
	I0929 10:36:57.376412  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:57.376539  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:57.692404  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.793184  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:57.793318  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.191147  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:58.233460  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.233602  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:58.692356  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:58.733694  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.733886  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:59.194666  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:59.293196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:59.293196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:59.692198  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:59.792340  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:59.792540  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:00.126856  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.750359151s)
	I0929 10:37:00.126994  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.750412046s)
	W0929 10:37:00.127048  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
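kubectl's client-side validation rejects any YAML document whose apiVersion or kind is unset; an empty document between "---" separators produces exactly this message, which is one plausible cause of the failure above. A hedged pre-check sketch; the naive document splitting and all names here are illustrative, not how minikube diagnoses this.

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	raw, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	// Naive multi-document split; a robust check would use yaml.NewDecoder.
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			fmt.Printf("doc %d: unparsable: %v\n", i, err)
			continue
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Printf("doc %d: apiVersion/kind not set\n", i)
		}
	}
}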
	I0929 10:37:00.127077  809094 retry.go:31] will retry after 243.584374ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
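The retry.go delays across this section (243ms, 313ms, 1.15s, 1.51s, 2.1s, 3.38s, 5.52s, ...) grow roughly exponentially with jitter. A minimal sketch of that retry shape; the base delay, cap, and multiplier are assumptions, not minikube's actual tuning.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs op until it succeeds or attempts are exhausted, doubling the
// delay each round (with jitter) up to maxDelay.
func retry(attempts int, base, maxDelay time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Jitter in [delay/2, 3*delay/2) keeps concurrent retries from syncing up.
		wait := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return err
}

func main() {
	calls := 0
	_ = retry(8, 250*time.Millisecond, 6*time.Second, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("transient failure %d", calls)
		}
		return nil
	})
}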
	I0929 10:37:00.191955  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:00.292723  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:00.292816  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:00.371587  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:00.692541  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:00.733843  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:00.734315  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:01.147379  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:01.147434  809094 retry.go:31] will retry after 313.436894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:01.192318  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:01.194800  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:37:01.194881  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:37:01.217609  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
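The cli_runner call above recovers the host port Docker mapped to the container's 22/tcp (33519 in this run) via an inspect template; the single quotes around the template in the log are shell quoting. The same lookup from Go, with sshHostPort as an assumed helper name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker for the host port bound to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	port, err := sshHostPort("addons-529420")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", port) // 33519 in this run
}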
	I0929 10:37:01.269785  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:01.269897  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:01.337246  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 10:37:01.363074  809094 addons.go:238] Setting addon gcp-auth=true in "addons-529420"
	I0929 10:37:01.363143  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:37:01.363713  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:37:01.386898  809094 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:37:01.386956  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:37:01.409699  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:37:01.461472  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:01.691916  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:01.733454  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:01.734086  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.191808  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:37:02.208498  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:02.208536  809094 retry.go:31] will retry after 1.150273014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:02.211872  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:37:02.213574  809094 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:37:02.214876  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:37:02.214902  809094 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:37:02.234020  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:02.234162  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.241298  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:37:02.241331  809094 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:37:02.266924  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:37:02.266946  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
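The "scp memory --> <path> (N bytes)" entries stream an in-memory buffer to a file on the node over the SSH session opened earlier. A sketch of that idea using sudo tee as the remote sink; the helper name, the tee approach, and the password auth in main are assumptions (the real run authenticates with the id_rsa key shown above), only the golang.org/x/crypto/ssh calls are real API.

package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// scpMemory streams data to dest on the remote host; tee runs under sudo
// because the addon manifests live under /etc/kubernetes.
func scpMemory(client *ssh.Client, data []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dest))
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("example")}, // key auth in the real run
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33519", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := scpMemory(client, []byte("{}"), "/tmp/example.json"); err != nil {
		panic(err)
	}
}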
	I0929 10:37:02.293874  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:37:02.684619  809094 addons.go:479] Verifying addon gcp-auth=true in "addons-529420"
	I0929 10:37:02.685986  809094 out.go:179] * Verifying gcp-auth addon...
	I0929 10:37:02.688185  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:37:02.693394  809094 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:37:02.693428  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:02.693405  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:02.733763  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.733885  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.192364  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:03.192855  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:03.234316  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.234465  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:03.359547  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:03.692021  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:03.692117  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:03.733209  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.734038  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:04.191279  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:04.191358  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:04.233692  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:04.234492  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:04.287957  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:04.287997  809094 retry.go:31] will retry after 1.511178059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:04.692167  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:04.692234  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:04.733131  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:04.734062  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.191715  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:05.191921  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:05.233860  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:05.234129  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.691361  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:05.691521  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:05.734135  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:05.734258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.800129  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:06.191447  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:06.191552  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:06.233874  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:06.234076  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:06.411195  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:06.411250  809094 retry.go:31] will retry after 2.100064857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:06.691798  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:06.692073  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:06.732827  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:06.733809  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.191508  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:07.191709  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:07.292363  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:07.292457  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.691546  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:07.691574  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:07.734030  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.734125  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.191781  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:08.191814  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:08.232967  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.233754  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:08.512112  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:08.691009  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:08.691510  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:08.733861  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.733938  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:09.104861  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:09.104897  809094 retry.go:31] will retry after 3.376157006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:09.191367  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:09.191595  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:09.233494  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:09.233556  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:09.691727  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:09.691798  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:09.733640  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:09.733721  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:10.190922  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:10.191004  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:10.233082  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:10.234008  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:10.691979  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:10.692058  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:10.733220  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:10.733906  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:11.191409  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:11.191511  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:11.234068  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:11.234155  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:11.691573  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:11.691635  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:11.732866  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:11.733594  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:12.191979  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:12.192191  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:12.232888  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:12.233877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:12.481250  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:12.691033  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:12.691374  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:12.733300  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:12.733527  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:13.054889  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:13.054935  809094 retry.go:31] will retry after 5.523481945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:13.191285  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:13.191341  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:13.233717  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:13.233802  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:13.691877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:13.691923  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:13.732931  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:13.733745  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:14.191616  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:14.191752  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:14.233768  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:14.233879  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:14.691856  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:14.691909  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:14.733917  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:14.733984  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:15.191840  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:15.191956  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:15.232857  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:15.233963  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:15.691580  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:15.691731  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:15.733913  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:15.735663  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:16.191071  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:16.191303  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:16.233820  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:16.233918  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:16.691314  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:16.691386  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:16.734056  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:16.734170  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:17.191402  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:17.191593  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:17.233380  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:17.233625  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:17.691493  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:17.691699  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:17.732648  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:17.733372  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:18.191911  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:18.192046  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:18.233634  809094 kapi.go:107] duration metric: took 21.004035787s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 10:37:18.234214  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:18.578952  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:18.692732  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:18.693005  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:18.734571  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:19.192457  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:19.192698  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:19.236192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:19.359657  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:19.359698  809094 retry.go:31] will retry after 5.374303415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:19.691689  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:19.691870  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:19.734388  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:20.192362  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:20.192436  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:20.235035  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:20.691493  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:20.691559  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:20.734068  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:21.191686  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:21.191754  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:21.234043  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:21.691673  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:21.691816  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:21.734360  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:22.192209  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:22.192364  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:22.234765  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:22.691170  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:22.691296  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:22.734585  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:23.191838  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:23.191855  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:23.234192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:23.691617  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:23.691827  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:23.734342  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:24.191851  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:24.191914  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:24.234052  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:24.691665  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:24.691766  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:24.734744  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:24.778008  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:25.191224  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:25.191245  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:25.233932  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:25.323222  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:25.323263  809094 retry.go:31] will retry after 4.946418269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:25.692211  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:25.692261  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:25.793190  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:26.191796  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:26.192044  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:26.234744  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:26.691669  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:26.691730  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:26.792265  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:27.191948  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:27.192054  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:27.293582  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:27.691274  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:27.691324  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:27.734583  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:28.191896  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:28.192035  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:28.234579  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:28.691393  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:28.691468  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:28.745258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:29.191857  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:29.192145  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:29.293661  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:29.692369  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:29.692409  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:29.734847  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:30.191284  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:30.191549  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:30.235247  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:30.270015  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:30.692450  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:30.692468  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:30.734382  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:30.874687  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:30.874717  809094 retry.go:31] will retry after 9.611404012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:31.191034  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:31.191136  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:31.234596  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:31.691789  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:31.691851  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:31.734487  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:32.192228  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:32.192397  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:32.234895  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:32.691647  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:32.691826  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:32.734231  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:33.191791  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:33.191803  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:33.293681  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:33.691499  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:33.691575  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:33.733652  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:34.191347  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:34.191394  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:34.235018  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:34.692239  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:34.692288  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:34.734356  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:35.191945  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:35.192042  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:35.233802  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:35.691543  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:35.691708  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:35.733748  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:36.190728  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:36.190927  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:36.234247  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:36.691843  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:36.691958  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:36.734324  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:37.232504  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:37.232580  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:37.234832  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:37.691206  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:37.691326  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:37.734619  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:38.191595  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:38.191680  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:38.292586  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:38.692258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:38.692407  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:38.734713  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:39.191465  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:39.191541  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:39.233459  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:39.692264  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:39.692361  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:39.736727  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:40.191041  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:40.191241  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:40.282297  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:40.487332  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:40.700928  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:40.700982  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:40.734342  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:41.111058  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:41.111096  809094 retry.go:31] will retry after 27.798318096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:41.192192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:41.192448  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:41.234734  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:41.691467  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:41.691559  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:41.738052  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:42.191281  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:42.191334  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:42.234363  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:42.692206  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:42.692364  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:42.735143  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:43.192059  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:43.192131  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:43.234581  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:43.691539  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:43.691592  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:43.734168  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:44.192399  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:44.192554  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:44.235151  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:44.712201  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:44.712256  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:44.814275  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:45.191818  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:45.193367  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:45.292640  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:45.692318  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:45.692523  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:45.734670  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:46.191229  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:46.191310  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:46.234266  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:46.691380  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:46.691528  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:46.735585  809094 kapi.go:107] duration metric: took 49.505194902s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 10:37:47.191025  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:47.191255  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:47.691635  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:47.691700  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:48.191562  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:48.191703  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:48.692227  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:48.692295  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.191449  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:49.191458  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.691391  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.691392  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.191310  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.191376  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:50.691312  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.691530  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:51.190897  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:51.191071  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:51.690962  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:51.690975  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:52.191596  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:52.191799  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:52.691821  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:52.691860  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.192013  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:53.192203  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.690815  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.691412  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:54.191531  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:54.191563  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:54.690735  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:54.691192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.191955  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.192190  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:55.691501  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.691567  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:56.191135  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:56.191259  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:56.691523  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:56.691569  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.190892  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.191196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:57.691389  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.691460  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.191486  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.191637  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:58.691468  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.691615  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:59.191501  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:59.191524  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:59.691453  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:59.691624  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:00.191269  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:00.191495  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:00.691597  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:00.691645  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:01.192840  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:01.193251  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:01.692293  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:01.692359  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:02.191486  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:02.191506  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:02.691685  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:02.691815  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:03.191877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:03.191952  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:03.692663  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:03.692768  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:04.191650  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:04.191736  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:04.691141  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:04.691354  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:05.191572  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:05.191608  809094 kapi.go:107] duration metric: took 1m9.004009852s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:38:05.692502  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:06.192606  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:06.692131  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:07.191846  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:07.691923  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.192050  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.691556  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.910309  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:38:09.192274  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:38:09.505831  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:38:09.505869  809094 retry.go:31] will retry after 26.574045109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:38:09.692484  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:10.191984  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:10.692204  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:11.191688  809094 kapi.go:107] duration metric: took 1m8.503599106s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:38:11.193489  809094 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-529420 cluster.
	I0929 10:38:11.194659  809094 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:38:11.195892  809094 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
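The gcp-auth opt-out described above is enforced by a mutating admission webhook, so the label has to be present when the pod is created; labeling a running pod has no effect (hence the advice to recreate existing pods). A minimal sketch of an opted-out pod, where the pod name, image, and label value are assumptions since the message only names the key:

	# Create a pod that the gcp-auth webhook should leave unmutated.
	# (Name, image, and label value are hypothetical.)
	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-example
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF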
	I0929 10:38:36.081555  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 10:38:36.665106  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 10:38:36.665238  809094 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
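After repeated back-off retries, the addon manager gives up and reports inspektor-gadget as the one addon that failed to enable; every attempt died on the identical validation error. The error text itself names an escape hatch, so a manual workaround is to re-run the exact command from the log with validation disabled. This is a blunt instrument (it would also apply a genuinely malformed manifest), so treat it as a sketch rather than a fix:

	# Re-run the failing apply with client-side validation turned off,
	# as the error message suggests (command copied from the log above).
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml \
	  -f /etc/kubernetes/addons/ig-deployment.yaml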
	I0929 10:38:36.667274  809094 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, registry-creds, metrics-server, ingress-dns, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0929 10:38:36.668228  809094 addons.go:514] duration metric: took 1m42.964873486s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass registry-creds metrics-server ingress-dns volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0929 10:38:36.668278  809094 start.go:246] waiting for cluster config update ...
	I0929 10:38:36.668307  809094 start.go:255] writing updated cluster config ...
	I0929 10:38:36.668641  809094 ssh_runner.go:195] Run: rm -f paused
	I0929 10:38:36.672803  809094 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:38:36.676997  809094 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5qvcx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.681719  809094 pod_ready.go:94] pod "coredns-66bc5c9577-5qvcx" is "Ready"
	I0929 10:38:36.681743  809094 pod_ready.go:86] duration metric: took 4.720124ms for pod "coredns-66bc5c9577-5qvcx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.684044  809094 pod_ready.go:83] waiting for pod "etcd-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.688395  809094 pod_ready.go:94] pod "etcd-addons-529420" is "Ready"
	I0929 10:38:36.688443  809094 pod_ready.go:86] duration metric: took 4.375753ms for pod "etcd-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.690812  809094 pod_ready.go:83] waiting for pod "kube-apiserver-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.694897  809094 pod_ready.go:94] pod "kube-apiserver-addons-529420" is "Ready"
	I0929 10:38:36.694938  809094 pod_ready.go:86] duration metric: took 4.098109ms for pod "kube-apiserver-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.696911  809094 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.077390  809094 pod_ready.go:94] pod "kube-controller-manager-addons-529420" is "Ready"
	I0929 10:38:37.077434  809094 pod_ready.go:86] duration metric: took 380.496118ms for pod "kube-controller-manager-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.276699  809094 pod_ready.go:83] waiting for pod "kube-proxy-kxjqx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.677151  809094 pod_ready.go:94] pod "kube-proxy-kxjqx" is "Ready"
	I0929 10:38:37.677180  809094 pod_ready.go:86] duration metric: took 400.453557ms for pod "kube-proxy-kxjqx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.877980  809094 pod_ready.go:83] waiting for pod "kube-scheduler-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:38.277511  809094 pod_ready.go:94] pod "kube-scheduler-addons-529420" is "Ready"
	I0929 10:38:38.277542  809094 pod_ready.go:86] duration metric: took 399.519909ms for pod "kube-scheduler-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:38.277554  809094 pod_ready.go:40] duration metric: took 1.604719848s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:38:38.326458  809094 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 10:38:38.327948  809094 out.go:179] * Done! kubectl is now configured to use "addons-529420" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 29 10:38:54 addons-529420 dockerd[1117]: time="2025-09-29T10:38:54.431294423Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:38:55 addons-529420 dockerd[1117]: time="2025-09-29T10:38:55.956282396Z" level=warning msg="reference for unknown type: " digest="sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" remote="docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
	Sep 29 10:38:56 addons-529420 dockerd[1117]: time="2025-09-29T10:38:56.432121846Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:39:01 addons-529420 dockerd[1117]: time="2025-09-29T10:39:01.956384722Z" level=warning msg="reference for unknown type: " digest="sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242" remote="docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242"
	Sep 29 10:39:02 addons-529420 dockerd[1117]: time="2025-09-29T10:39:02.432037336Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:39:07 addons-529420 dockerd[1117]: time="2025-09-29T10:39:07.951136733Z" level=warning msg="reference for unknown type: " digest="sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34" remote="docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Sep 29 10:39:08 addons-529420 dockerd[1117]: time="2025-09-29T10:39:08.429645691Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:40:25 addons-529420 dockerd[1117]: time="2025-09-29T10:40:25.959816517Z" level=warning msg="reference for unknown type: " digest="sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001" remote="docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001"
	Sep 29 10:40:26 addons-529420 dockerd[1117]: time="2025-09-29T10:40:26.753947313Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:40:26 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:40:26Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: Pulling from volcanosh/vc-webhook-manager"
	Sep 29 10:40:26 addons-529420 dockerd[1117]: time="2025-09-29T10:40:26.993937726Z" level=warning msg="reference for unknown type: " digest="sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" remote="docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
	Sep 29 10:40:27 addons-529420 dockerd[1117]: time="2025-09-29T10:40:27.472009060Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:40:30 addons-529420 dockerd[1117]: time="2025-09-29T10:40:30.953819279Z" level=warning msg="reference for unknown type: " digest="sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242" remote="docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242"
	Sep 29 10:40:31 addons-529420 dockerd[1117]: time="2025-09-29T10:40:31.424851026Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:40:37 addons-529420 dockerd[1117]: time="2025-09-29T10:40:37.949542250Z" level=warning msg="reference for unknown type: " digest="sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34" remote="docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Sep 29 10:40:38 addons-529420 dockerd[1117]: time="2025-09-29T10:40:38.420898087Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:43:08 addons-529420 dockerd[1117]: time="2025-09-29T10:43:08.953478831Z" level=warning msg="reference for unknown type: " digest="sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001" remote="docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001"
	Sep 29 10:43:09 addons-529420 dockerd[1117]: time="2025-09-29T10:43:09.710629510Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:43:09 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:43:09Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: Pulling from volcanosh/vc-webhook-manager"
	Sep 29 10:43:11 addons-529420 dockerd[1117]: time="2025-09-29T10:43:11.953372673Z" level=warning msg="reference for unknown type: " digest="sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242" remote="docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242"
	Sep 29 10:43:12 addons-529420 dockerd[1117]: time="2025-09-29T10:43:12.421405360Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:43:14 addons-529420 dockerd[1117]: time="2025-09-29T10:43:14.956439744Z" level=warning msg="reference for unknown type: " digest="sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" remote="docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
	Sep 29 10:43:15 addons-529420 dockerd[1117]: time="2025-09-29T10:43:15.431004627Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:43:27 addons-529420 dockerd[1117]: time="2025-09-29T10:43:27.956159885Z" level=warning msg="reference for unknown type: " digest="sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34" remote="docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Sep 29 10:43:28 addons-529420 dockerd[1117]: time="2025-09-29T10:43:28.428600521Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
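Every pull attempt in this window fails with Docker Hub's toomanyrequests response, the anonymous pull rate limit, which is why the volcanosh and marcnuri/yakd images never reach the node. A hedged workaround sketch, assuming the host (unlike the node) can authenticate to Docker Hub; minikube image load side-loads a host image into the cluster so the kubelet never has to pull it:

	# Authenticate on the host to lift the anonymous limit, pull once,
	# then side-load the image into the addons-529420 node
	# (image reference and profile name taken from the log above).
	docker login
	docker pull docker.io/volcanosh/vc-scheduler:v1.13.0
	minikube -p addons-529420 image load docker.io/volcanosh/vc-scheduler:v1.13.0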
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	869afa433675e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7                                 6 minutes ago       Running             gcp-auth                                 0                   54951ac9f0038       gcp-auth-78565c9fb4-r7k7m
	53f450233c951       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             6 minutes ago       Running             controller                               0                   7e7e5a36e33c6       ingress-nginx-controller-9cc49f96f-pxng2
	b8bc330ded02a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	097ec25364497       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          6 minutes ago       Running             csi-provisioner                          0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	42c80806f3442       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            6 minutes ago       Running             liveness-probe                           0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	deb6678813fad       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           6 minutes ago       Running             hostpath                                 0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	4c9a460366a7b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                6 minutes ago       Running             node-driver-registrar                    0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	8c797f52e7973       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              7 minutes ago       Running             csi-resizer                              0                   f5028bed08a36       csi-hostpath-resizer-0
	8ea1f8a967cfc       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   7 minutes ago       Running             csi-external-health-monitor-controller   0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	7ed61e7756a57       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             7 minutes ago       Running             csi-attacher                             0                   80edbd2ea7f0f       csi-hostpath-attacher-0
	ac3df2986f815       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      7 minutes ago       Running             volume-snapshot-controller               0                   2f35c5222a676       snapshot-controller-7d9fbc56b8-pwx9p
	cb2fd8f092644       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      7 minutes ago       Running             volume-snapshot-controller               0                   ba080eb06cf32       snapshot-controller-7d9fbc56b8-bxfh2
	2575c5650e053       8c217da6734db                                                                                                                                7 minutes ago       Exited              patch                                    1                   8b7ddb7c14d4d       ingress-nginx-admission-patch-ncp6w
	788b28f41ae4a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   7 minutes ago       Exited              create                                   0                   fe7563f8d917a       ingress-nginx-admission-create-tj9td
	866b95aa367a6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            7 minutes ago       Running             gadget                                   0                   7ee25e2189efc       gadget-8bvtc
	df4679be4c526       registry.k8s.io/metrics-server/metrics-server@sha256:89258156d0e9af60403eafd44da9676fd66f600c7934d468ccc17e42b199aee2                        7 minutes ago       Running             metrics-server                           0                   4fc1af4f060a0       metrics-server-85b7d694d7-6kvbb
	7292e19e493f5       6e38f40d628db                                                                                                                                7 minutes ago       Running             storage-provisioner                      1                   1360822b0ef5c       storage-provisioner
	8cc199db322c9       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       7 minutes ago       Running             local-path-provisioner                   0                   aae77681f23ba       local-path-provisioner-648f6765c9-fg6hp
	88b9438083846       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         7 minutes ago       Running             minikube-ingress-dns                     0                   33ea5a9962762       kube-ingress-dns-minikube
	66aca914f06c0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:f832bbe1d48c62de040bd793937eaa0c05d2f945a55376a99c80a4dd9961aeb1                              7 minutes ago       Running             registry-proxy                           0                   e1ac1f8211a25       registry-proxy-b2bpz
	f7ba5d4d4b944       registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d                                                             7 minutes ago       Running             registry                                 0                   5e0c280a3cca8       registry-66898fdd98-9545s
	3840db368fa91       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               7 minutes ago       Running             cloud-spanner-emulator                   0                   55c14d8df4114       cloud-spanner-emulator-85f6b7fc65-4djrt
	5d4060a49ec40       rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                               7 minutes ago       Running             amd-gpu-device-plugin                    0                   8aaadd76e64b3       amd-gpu-device-plugin-5pr2g
	acea706acb68d       nvcr.io/nvidia/k8s-device-plugin@sha256:630596340f8e83aa10b0bc13a46db76772e31b7dccfc34d3a4e41ab7e0aa6117                                     7 minutes ago       Running             nvidia-device-plugin-ctr                 0                   7029b34fcc600       nvidia-device-plugin-daemonset-rv5nz
	910e07bc5455e       6e38f40d628db                                                                                                                                7 minutes ago       Exited              storage-provisioner                      0                   1360822b0ef5c       storage-provisioner
	c491d217e6a98       52546a367cc9e                                                                                                                                7 minutes ago       Running             coredns                                  0                   5f95340f57eb0       coredns-66bc5c9577-5qvcx
	7f229dd48a2c3       df0860106674d                                                                                                                                7 minutes ago       Running             kube-proxy                               0                   6014edb3b0f91       kube-proxy-kxjqx
	cc27cd95c2f3e       90550c43ad2bc                                                                                                                                7 minutes ago       Running             kube-apiserver                           0                   2e57097224cc0       kube-apiserver-addons-529420
	ea3b2c765f2ae       a0af72f2ec6d6                                                                                                                                7 minutes ago       Running             kube-controller-manager                  0                   ce79a900c656a       kube-controller-manager-addons-529420
	7f38046322a2d       46169d968e920                                                                                                                                7 minutes ago       Running             kube-scheduler                           0                   74b0641932447       kube-scheduler-addons-529420
	3a3152518f29b       5f1f5298c888d                                                                                                                                7 minutes ago       Running             etcd                                     0                   099566320ee0a       etcd-addons-529420
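
The runtime's container listing above is itself diagnostic: every addon served from gcr.io, registry.k8s.io, ghcr.io, nvcr.io, rancher or rocm is Running, while no volcano-* or yakd container appears at all, meaning those pods never got past the image pull. A quick confirmation of which pods never started a container (same context name as this run):

    kubectl --context addons-529420 get pods -n volcano-system -o wide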
	
	
	==> controller_ingress [53f450233c95] <==
	W0929 10:38:04.433632       7 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0929 10:38:04.433806       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0929 10:38:04.439997       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="34" git="v1.34.0" state="clean" commit="f28b4c9efbca5c5c0af716d9f2d5702667ee8a45" platform="linux/amd64"
	I0929 10:38:04.662690       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0929 10:38:04.672931       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0929 10:38:04.680412       7 nginx.go:273] "Starting NGINX Ingress controller"
	I0929 10:38:04.685586       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b1bba541-1838-45bd-a21a-fb3a52158121", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0929 10:38:04.686863       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"47e79a0f-52ba-4094-b6bf-cff5bc2f5782", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0929 10:38:04.686885       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"7d8260c8-2dc2-42de-8475-3174f0fbcebe", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0929 10:38:05.882258       7 nginx.go:319] "Starting NGINX process"
	I0929 10:38:05.882452       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0929 10:38:05.882700       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0929 10:38:05.883222       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 10:38:05.888729       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0929 10:38:05.888761       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-pxng2"
	I0929 10:38:05.892640       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-pxng2" node="addons-529420"
	I0929 10:38:05.926797       7 controller.go:228] "Backend successfully reloaded"
	I0929 10:38:05.926911       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0929 10:38:05.926976       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-pxng2", UID:"587b0efc-8a0d-48e9-8f76-f6c72ed2a5f2", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0929 10:38:05.993545       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-pxng2" node="addons-529420"
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.27.1
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [c491d217e6a9] <==
	[INFO] 10.244.0.7:55332 - 62171 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003001279s
	[INFO] 10.244.0.7:50415 - 39816 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000090632s
	[INFO] 10.244.0.7:50415 - 40142 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000140179s
	[INFO] 10.244.0.7:51631 - 16547 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000077812s
	[INFO] 10.244.0.7:51631 - 16342 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000121622s
	[INFO] 10.244.0.7:54120 - 55745 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000074894s
	[INFO] 10.244.0.7:54120 - 55427 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00012144s
	[INFO] 10.244.0.7:44276 - 57260 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156752s
	[INFO] 10.244.0.7:44276 - 57057 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019348s
	[INFO] 10.244.0.25:60127 - 41853 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000357536s
	[INFO] 10.244.0.25:49489 - 60455 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000420887s
	[INFO] 10.244.0.25:56444 - 64863 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123075s
	[INFO] 10.244.0.25:47991 - 45219 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000219459s
	[INFO] 10.244.0.25:41073 - 37798 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132747s
	[INFO] 10.244.0.25:56023 - 28022 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00018111s
	[INFO] 10.244.0.25:34207 - 60772 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003985247s
	[INFO] 10.244.0.25:34528 - 32480 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00679976s
	[INFO] 10.244.0.25:57103 - 8380 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006696106s
	[INFO] 10.244.0.25:58770 - 56546 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.011895734s
	[INFO] 10.244.0.25:46155 - 33827 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004624849s
	[INFO] 10.244.0.25:33222 - 31647 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.008860505s
	[INFO] 10.244.0.25:50865 - 49473 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.0056946s
	[INFO] 10.244.0.25:47833 - 61181 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006586754s
	[INFO] 10.244.0.25:57597 - 20971 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000971648s
	[INFO] 10.244.0.25:51886 - 57600 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00104711s
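
The NXDOMAIN/NOERROR pairs above are ordinary resolver search-path expansion, not failures: with the default pod resolv.conf (ndots:5), a name such as registry.kube-system.svc.cluster.local is tried against every search suffix (.local, the GCE zone suffixes, google.internal) before the absolute query answers NOERROR. To inspect the search list a pod actually uses (sketch; assumes the container image ships cat):

    kubectl --context addons-529420 exec -n kube-system registry-proxy-b2bpz -- cat /etc/resolv.conf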
	
	
	==> describe nodes <==
	Name:               addons-529420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-529420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=addons-529420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_36_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-529420
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-529420"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:36:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-529420
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:44:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:41:55 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:41:55 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:41:55 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:41:55 +0000   Mon, 29 Sep 2025 10:36:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-529420
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a82221623174cb89aca2e3330c89f2e
	  System UUID:                a576baa7-93e3-43ba-ad25-9862293a3344
	  Boot ID:                    2fed06dd-7ca1-49fc-8cb4-296c71642eea
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-85f6b7fc65-4djrt     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  gadget                      gadget-8bvtc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  gcp-auth                    gcp-auth-78565c9fb4-r7k7m                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-pxng2    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         7m43s
	  kube-system                 amd-gpu-device-plugin-5pr2g                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 coredns-66bc5c9577-5qvcx                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m45s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 csi-hostpathplugin-z7562                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 etcd-addons-529420                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m51s
	  kube-system                 kube-apiserver-addons-529420                250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-controller-manager-addons-529420       200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 kube-proxy-kxjqx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-scheduler-addons-529420                100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 metrics-server-85b7d694d7-6kvbb             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m44s
	  kube-system                 nvidia-device-plugin-daemonset-rv5nz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 registry-66898fdd98-9545s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 registry-creds-764b6fb674-ssj2n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 registry-proxy-b2bpz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 snapshot-controller-7d9fbc56b8-bxfh2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 snapshot-controller-7d9fbc56b8-pwx9p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  local-path-storage          local-path-provisioner-648f6765c9-fg6hp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  volcano-system              volcano-admission-6c447bd768-njmt4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  volcano-system              volcano-admission-init-d2jxg                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  volcano-system              volcano-controllers-6fd4f85cb8-bqgx5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  volcano-system              volcano-scheduler-76c996c8bf-kbgrj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-5thg8              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     7m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  0 (0%)
	  memory             588Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m43s                  kube-proxy       
	  Normal  Starting                 7m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m55s (x8 over 7m55s)  kubelet          Node addons-529420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m55s (x8 over 7m55s)  kubelet          Node addons-529420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m55s (x7 over 7m55s)  kubelet          Node addons-529420 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m51s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m51s                  kubelet          Node addons-529420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m51s                  kubelet          Node addons-529420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m51s                  kubelet          Node addons-529420 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m46s                  node-controller  Node addons-529420 event: Registered Node addons-529420 in Controller
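
Nothing in the node description points at capacity: the node is Ready with no memory, disk or PID pressure, only 950m CPU of 8 cores is requested, and all four volcano-system pods were scheduled about 7m43s before this dump. The pods are therefore blocked purely on image pulls, not on scheduling. One way to list only the containers stuck waiting on a pull (sketch; init containers such as volcano-admission-init report under initContainerStatuses instead):

    kubectl --context addons-529420 get pods -A \
      -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{" "}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}' \
      | grep -i pull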
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 d0 8d 6b 0d 1b 08 06
	[  +0.583720] IPv4: martian source 10.244.0.7 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[  +0.025782] IPv4: martian source 10.244.0.7 from 10.244.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 48 7b 6f c2 51 08 06
	[  +1.972009] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 60 55 0a b3 db 08 06
	[  +0.024494] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 ec 8f 40 19 66 08 06
	[  +2.802739] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 72 ba 15 15 c7 ec 08 06
	[  +0.014450] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9a 38 c6 be 73 e8 08 06
	[  +2.936645] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 df 88 08 78 89 08 06
	[  +0.326891] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 b3 ea 3d 15 d3 08 06
	[  +0.292071] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 79 3e 6e 1c 85 08 06
	[Sep29 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 fd 85 4d 34 7c 08 06
	[  +6.578912] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 8d 25 f9 d3 f7 08 06
	[  +0.000514] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
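
The dmesg noise is benign: "martian source" here is the kernel flagging broadcast ARP frames from pod addresses (10.244.0.x) as seen on the node's eth0, commonly observed with minikube's Docker driver and its bridge network. Each "ll header" decodes as a plain Ethernet broadcast carrying ARP; for the last entry above:

    ff ff ff ff ff ff   destination MAC (broadcast)
    ce 79 85 72 eb a1   source MAC (the sending veth)
    08 06               EtherType 0x0806 = ARP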
	
	
	==> etcd [3a3152518f29] <==
	{"level":"warn","ts":"2025-09-29T10:36:45.858719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.865714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.874122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.881854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.888653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.902856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.911144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.918112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.975495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:57.777500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36938","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:37:03.895377Z","caller":"traceutil/trace.go:172","msg":"trace[858034751] transaction","detail":"{read_only:false; response_revision:1015; number_of_response:1; }","duration":"128.948339ms","start":"2025-09-29T10:37:03.766408Z","end":"2025-09-29T10:37:03.895357Z","steps":["trace[858034751] 'process raft request'  (duration: 128.809516ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:37:04.039951Z","caller":"traceutil/trace.go:172","msg":"trace[799810606] transaction","detail":"{read_only:false; response_revision:1016; number_of_response:1; }","duration":"139.742546ms","start":"2025-09-29T10:37:03.900189Z","end":"2025-09-29T10:37:04.039932Z","steps":["trace[799810606] 'process raft request'  (duration: 139.624536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:37:23.432233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.465158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.491685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.505493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.513523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.523591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.578347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.609684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.635915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.643629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.654366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.666394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55308","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:38:03.955142Z","caller":"traceutil/trace.go:172","msg":"trace[1495019545] transaction","detail":"{read_only:false; response_revision:1324; number_of_response:1; }","duration":"128.921298ms","start":"2025-09-29T10:38:03.826200Z","end":"2025-09-29T10:38:03.955121Z","steps":["trace[1495019545] 'process raft request'  (duration: 102.9549ms)","trace[1495019545] 'compare'  (duration: 25.853332ms)"],"step_count":2}
	
	
	==> gcp-auth [869afa433675] <==
	2025/09/29 10:38:11 GCP Auth Webhook started!
	
	
	==> kernel <==
	 10:44:40 up  2:27,  0 users,  load average: 0.04, 0.45, 0.85
	Linux addons-529420 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [cc27cd95c2f3] <==
	E0929 10:37:41.180010       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 10:37:41.180036       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 10:37:41.180032       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 10:37:41.181167       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 10:37:45.188764       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 10:37:45.188810       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.223.152:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.223.152:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E0929 10:37:45.188826       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 10:37:45.200947       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 10:37:46.662233       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:38:15.371669       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:38:49.295254       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:39:21.925543       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:40:11.955679       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:40:43.631147       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:41:23.295727       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:10.363344       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:42:49.407893       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:43:30.218962       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:44:15.827850       1 stats.go:136] "Error getting keys" err="empty key: \"\""
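
The early v1beta1.metrics.k8s.io errors are the aggregation layer retrying while metrics-server was still starting (503s and a connect timeout to 10.106.223.152:443); the "Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager" line at 10:37:45 shows the APIService registering once it came up. The later "Error getting keys" err="empty key" lines recur every 30-50s with no visible consequence in this run. To confirm the APIService stayed healthy (sketch):

    kubectl --context addons-529420 get apiservice v1beta1.metrics.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'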
	
	
	==> kube-controller-manager [ea3b2c765f2a] <==
	I0929 10:36:53.412171       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:36:53.412319       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 10:36:53.412404       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 10:36:53.412550       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 10:36:53.412790       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 10:36:53.412849       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 10:36:53.415924       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 10:36:53.417107       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 10:36:53.420528       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:36:53.424222       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:36:53.432998       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 10:36:55.463698       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0929 10:37:23.426616       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 10:37:23.426799       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch.volcano.sh"
	I0929 10:37:23.426851       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0929 10:37:23.426899       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
	I0929 10:37:23.426924       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
	I0929 10:37:23.426955       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
	I0929 10:37:23.426989       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
	I0929 10:37:23.427023       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
	I0929 10:37:23.427116       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 10:37:23.442473       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0929 10:37:23.447709       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 10:37:24.628069       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:37:24.748472       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
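
The controller-manager log mostly records the resource-quota monitor picking up the freshly installed Volcano CRDs (jobs.batch.volcano.sh, podgroups.scheduling.volcano.sh, jobflows.flow.volcano.sh, and so on), plus a transient stale-discovery error for metrics.k8s.io that clears when the quota and garbage-collector caches resync at 10:37:24. Verifying one of the CRD groups is served (sketch):

    kubectl --context addons-529420 api-resources --api-group=batch.volcano.sh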
	
	
	==> kube-proxy [7f229dd48a2c] <==
	I0929 10:36:55.962965       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:36:56.114754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:36:56.216582       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:56.217225       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:36:56.218058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:56.276653       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:36:56.277056       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:56.332280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:56.332751       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:56.332790       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:56.334977       1 config.go:200] "Starting service config controller"
	I0929 10:36:56.335003       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:56.335633       1 config.go:309] "Starting node config controller"
	I0929 10:36:56.335645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:56.335652       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:56.335877       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:56.335887       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:56.335924       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:56.335932       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:56.435153       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:56.436458       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:56.436501       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7f38046322a2] <==
	I0929 10:36:46.773756       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0929 10:36:46.774579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:36:46.774618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 10:36:46.774676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:36:46.775018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:36:46.775287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:36:46.775438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:36:46.775611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:36:46.775732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:36:46.775732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:36:46.775766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:36:46.776166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:36:46.776381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:36:46.776394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:36:46.776523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:36:46.776523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:36:46.776556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:36:46.776598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:36:46.776614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 10:36:46.776631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:36:47.607010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:36:47.607036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:36:47.611460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:36:47.673666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0929 10:36:47.973483       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
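
The burst of "Failed to watch ... is forbidden" errors at 10:36:46-47 is the usual control-plane startup race: the scheduler starts its informers before the system:kube-scheduler RBAC bindings are visible, and the errors stop once caches sync at 10:36:47.97. A hedged way to confirm the permissions after startup:

    kubectl --context addons-529420 auth can-i list pods \
      --as=system:kube-scheduler --all-namespaces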
	
	
	==> kubelet <==
	Sep 29 10:43:30 addons-529420 kubelet[2313]: E0929 10:43:30.715132    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-5thg8" podUID="23d68edd-cead-477c-97b6-f1a7f8b03cbd"
	Sep 29 10:43:31 addons-529420 kubelet[2313]: E0929 10:43:31.715235    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[admission-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="volcano-system/volcano-admission-6c447bd768-njmt4" podUID="9eeaba50-5d99-4dd6-8777-ae6db2ba128c"
	Sep 29 10:43:31 addons-529420 kubelet[2313]: E0929 10:43:31.715247    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-ssj2n" podUID="a5aecf48-b091-49c9-8743-a9d358c2fcc4"
	Sep 29 10:43:37 addons-529420 kubelet[2313]: E0929 10:43:37.713776    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-d2jxg" podUID="eefd0711-f41e-4a8f-85ff-ae0705bc0108"
	Sep 29 10:43:39 addons-529420 kubelet[2313]: E0929 10:43:39.713542    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-bqgx5" podUID="f337fbfa-ba45-4de4-9168-dc4d55ad6378"
	Sep 29 10:43:39 addons-529420 kubelet[2313]: E0929 10:43:39.713596    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-kbgrj" podUID="6a525f26-530a-4629-9a2c-fd4fb60b1c94"
	Sep 29 10:43:42 addons-529420 kubelet[2313]: E0929 10:43:42.715469    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-5thg8" podUID="23d68edd-cead-477c-97b6-f1a7f8b03cbd"
	Sep 29 10:43:44 addons-529420 kubelet[2313]: I0929 10:43:44.713621    2313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-5qvcx" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:43:50 addons-529420 kubelet[2313]: E0929 10:43:50.713707    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-d2jxg" podUID="eefd0711-f41e-4a8f-85ff-ae0705bc0108"
	Sep 29 10:43:53 addons-529420 kubelet[2313]: E0929 10:43:53.715117    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-5thg8" podUID="23d68edd-cead-477c-97b6-f1a7f8b03cbd"
	Sep 29 10:43:54 addons-529420 kubelet[2313]: E0929 10:43:54.713602    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-kbgrj" podUID="6a525f26-530a-4629-9a2c-fd4fb60b1c94"
	Sep 29 10:43:54 addons-529420 kubelet[2313]: E0929 10:43:54.713603    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-bqgx5" podUID="f337fbfa-ba45-4de4-9168-dc4d55ad6378"
	Sep 29 10:44:01 addons-529420 kubelet[2313]: I0929 10:44:01.713190    2313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-b2bpz" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:44:01 addons-529420 kubelet[2313]: E0929 10:44:01.713520    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-d2jxg" podUID="eefd0711-f41e-4a8f-85ff-ae0705bc0108"
	Sep 29 10:44:06 addons-529420 kubelet[2313]: E0929 10:44:06.715203    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-5thg8" podUID="23d68edd-cead-477c-97b6-f1a7f8b03cbd"
	Sep 29 10:44:07 addons-529420 kubelet[2313]: E0929 10:44:07.713593    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-kbgrj" podUID="6a525f26-530a-4629-9a2c-fd4fb60b1c94"
	Sep 29 10:44:09 addons-529420 kubelet[2313]: E0929 10:44:09.713544    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-bqgx5" podUID="f337fbfa-ba45-4de4-9168-dc4d55ad6378"
	Sep 29 10:44:14 addons-529420 kubelet[2313]: E0929 10:44:14.713744    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-d2jxg" podUID="eefd0711-f41e-4a8f-85ff-ae0705bc0108"
	Sep 29 10:44:19 addons-529420 kubelet[2313]: E0929 10:44:19.713256    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-kbgrj" podUID="6a525f26-530a-4629-9a2c-fd4fb60b1c94"
	Sep 29 10:44:19 addons-529420 kubelet[2313]: E0929 10:44:19.715156    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-5thg8" podUID="23d68edd-cead-477c-97b6-f1a7f8b03cbd"
	Sep 29 10:44:24 addons-529420 kubelet[2313]: E0929 10:44:24.713016    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-bqgx5" podUID="f337fbfa-ba45-4de4-9168-dc4d55ad6378"
	Sep 29 10:44:29 addons-529420 kubelet[2313]: E0929 10:44:29.713826    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-d2jxg" podUID="eefd0711-f41e-4a8f-85ff-ae0705bc0108"
	Sep 29 10:44:30 addons-529420 kubelet[2313]: E0929 10:44:30.715701    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-5thg8" podUID="23d68edd-cead-477c-97b6-f1a7f8b03cbd"
	Sep 29 10:44:33 addons-529420 kubelet[2313]: E0929 10:44:33.712847    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-kbgrj" podUID="6a525f26-530a-4629-9a2c-fd4fb60b1c94"
	Sep 29 10:44:38 addons-529420 kubelet[2313]: E0929 10:44:38.715085    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-bqgx5" podUID="f337fbfa-ba45-4de4-9168-dc4d55ad6378"
	
	
	==> storage-provisioner [7292e19e493f] <==
	W0929 10:44:14.864205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:16.867782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:16.873465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:18.877329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:18.882446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:20.885991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:20.891142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:22.894648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:22.899884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:24.903610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:24.908045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:26.911282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:26.915955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:28.919109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:28.923544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:30.927299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:30.935255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:32.938516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:32.943705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:34.946588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:34.950892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:36.954285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:36.958748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:38.963099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:44:38.967776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
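	These repeated client-go warnings only mean the provisioner still watches the deprecated core/v1 Endpoints API; they are noise relative to the test failure. For reference, the non-deprecated replacement can be listed directly (an illustrative kubectl check, not part of the test run):
	
	    # discovery.k8s.io/v1 EndpointSlices supersede core/v1 Endpoints in v1.33+
	    kubectl --context addons-529420 get endpointslices.discovery.k8s.io -A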
	
	
	==> storage-provisioner [910e07bc5455] <==
	I0929 10:36:56.614357       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 10:37:26.626869       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
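	This earlier provisioner instance died because the in-cluster API VIP (10.96.0.1:443) was unreachable during its startup window; the restarted instance above recovered. A minimal reachability probe from inside the node might look like this (a sketch, assuming the profile is still running and curl is present in the node image):
	
	    # Probe the kubernetes service VIP from inside the minikube node
	    out/minikube-linux-amd64 -p addons-529420 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version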
	

-- /stdout --
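Every failed pull in the log above reports toomanyrequests, i.e. Docker Hub's unauthenticated pull rate limit, so the Volcano pods never got their images. The remaining anonymous quota can be checked against Docker's documented ratelimitpreview endpoint (a diagnostic sketch; jq is assumed to be installed):

    # Fetch an anonymous pull token, then read the RateLimit-* headers from a manifest HEAD request
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit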
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-529420 -n addons-529420
helpers_test.go:269: (dbg) Run:  kubectl --context addons-529420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w registry-creds-764b6fb674-ssj2n volcano-admission-6c447bd768-njmt4 volcano-admission-init-d2jxg volcano-controllers-6fd4f85cb8-bqgx5 volcano-scheduler-76c996c8bf-kbgrj yakd-dashboard-5ff678cb9-5thg8
helpers_test.go:282: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-529420 describe pod ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w registry-creds-764b6fb674-ssj2n volcano-admission-6c447bd768-njmt4 volcano-admission-init-d2jxg volcano-controllers-6fd4f85cb8-bqgx5 volcano-scheduler-76c996c8bf-kbgrj yakd-dashboard-5ff678cb9-5thg8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-529420 describe pod ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w registry-creds-764b6fb674-ssj2n volcano-admission-6c447bd768-njmt4 volcano-admission-init-d2jxg volcano-controllers-6fd4f85cb8-bqgx5 volcano-scheduler-76c996c8bf-kbgrj yakd-dashboard-5ff678cb9-5thg8: exit status 1 (73.647363ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tj9td" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ncp6w" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-ssj2n" not found
	Error from server (NotFound): pods "volcano-admission-6c447bd768-njmt4" not found
	Error from server (NotFound): pods "volcano-admission-init-d2jxg" not found
	Error from server (NotFound): pods "volcano-controllers-6fd4f85cb8-bqgx5" not found
	Error from server (NotFound): pods "volcano-scheduler-76c996c8bf-kbgrj" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-5thg8" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-529420 describe pod ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w registry-creds-764b6fb674-ssj2n volcano-admission-6c447bd768-njmt4 volcano-admission-init-d2jxg volcano-controllers-6fd4f85cb8-bqgx5 volcano-scheduler-76c996c8bf-kbgrj yakd-dashboard-5ff678cb9-5thg8: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-529420 addons disable volcano --alsologtostderr -v=1: (11.762603788s)
--- FAIL: TestAddons/serial/Volcano (374.29s)
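The failure is environmental (registry rate limiting) rather than a Volcano regression. Two possible mitigations for an agent like this, sketched under the assumption that the profile can be recreated (the mirror URL is illustrative):

    # Option 1: route node pulls through a registry mirror when starting the profile
    out/minikube-linux-amd64 start -p addons-529420 --driver=docker --registry-mirror=https://mirror.gcr.io
    # Option 2: pull once on the (authenticated) host, then side-load into the node's runtime
    docker pull docker.io/volcanosh/vc-scheduler:v1.13.0
    out/minikube-linux-amd64 -p addons-529420 image load docker.io/volcanosh/vc-scheduler:v1.13.0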

TestAddons/parallel/Ingress (491.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-529420 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-529420 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-529420 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f831007e-4eb0-4f0f-b585-d846fca29f0a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-529420 -n addons-529420
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-09-29 10:53:38.001503384 +0000 UTC m=+1071.837422204
addons_test.go:252: (dbg) Run:  kubectl --context addons-529420 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-529420 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-529420/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:45:37 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.31
IPs:
IP:  10.244.0.31
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nbt6z (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-nbt6z:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m1s                    default-scheduler  Successfully assigned default/nginx to addons-529420
Normal   Pulling    5m6s (x5 over 8m)       kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     5m5s (x5 over 7m59s)    kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m5s (x5 over 7m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    2m46s (x21 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     2m46s (x21 over 7m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-529420 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-529420 logs nginx -n default: exit status 1 (72.938255ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-529420 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
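The nginx pod hit the same Docker Hub rate limit. Whether the node itself can pull can be reproduced directly, and side-loading the image would unblock the test (illustrative commands, not part of the harness):

    # Reproduce the pull from inside the node's Docker daemon
    out/minikube-linux-amd64 -p addons-529420 ssh -- docker pull docker.io/nginx:alpine
    # Side-load the image from the host if the in-node pull is rate limited
    out/minikube-linux-amd64 -p addons-529420 image load docker.io/nginx:alpine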
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-529420
helpers_test.go:243: (dbg) docker inspect addons-529420:

-- stdout --
	[
	    {
	        "Id": "c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70",
	        "Created": "2025-09-29T10:36:32.534326886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 809726,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:36:32.572013527Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/hostname",
	        "HostsPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/hosts",
	        "LogPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70-json.log",
	        "Name": "/addons-529420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-529420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-529420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70",
	                "LowerDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21-init/diff:/var/lib/docker/overlay2/8785ead96546661033f04e32798b1ac8b184b1c04fc8862515b87d56909b6b9d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-529420",
	                "Source": "/var/lib/docker/volumes/addons-529420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-529420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-529420",
	                "name.minikube.sigs.k8s.io": "addons-529420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "198f81b793e83c57ca2854cf8d1d0464141313e0495f23d0367891bbb8b05aa2",
	            "SandboxKey": "/var/run/docker/netns/198f81b793e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-529420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:86:fe:b5:4c:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3724749a501df880fce098abbef40c1fc5e7d4b0ca633d7b7b33237babda66c6",
	                    "EndpointID": "381be5dcde6efd6127536cd89de924bde7f3b532aa3b84038bdaf13a91eea4d0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-529420",
	                        "c119ca8ea690"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
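When only a few fields of the inspect dump matter for triage (published ports, cluster IP), a Go-template query is shorter; for example:

    # Print just the host port map and the container's IP on the cluster network
    docker inspect -f '{{json .NetworkSettings.Ports}}' addons-529420
    docker inspect -f '{{(index .NetworkSettings.Networks "addons-529420").IPAddress}}' addons-529420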
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-529420 -n addons-529420
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-529420 logs -n 25: (1.001600253s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete  │ -p download-only-239254 │ download-only-239254 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ start   │ --download-only -p download-docker-537454 --alsologtostderr --driver=docker  --container-runtime=docker │ download-docker-537454 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ │
	│ delete  │ -p download-docker-537454 │ download-docker-537454 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ start   │ --download-only -p binary-mirror-606832 --alsologtostderr --binary-mirror http://127.0.0.1:41679 --driver=docker  --container-runtime=docker │ binary-mirror-606832 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ │
	│ delete  │ -p binary-mirror-606832 │ binary-mirror-606832 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ addons  │ enable dashboard -p addons-529420 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ │
	│ addons  │ disable dashboard -p addons-529420 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ │
	│ start   │ -p addons-529420 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:38 UTC │
	│ addons  │ addons-529420 addons disable volcano --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:44 UTC │ 29 Sep 25 10:44 UTC │
	│ addons  │ addons-529420 addons disable gcp-auth --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ enable headlamp -p addons-529420 --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable metrics-server --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable amd-gpu-device-plugin --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable headlamp --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ ip      │ addons-529420 ip │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable registry --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable yakd --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	│ addons  │ addons-529420 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-529420 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	│ addons  │ addons-529420 addons disable registry-creds --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	│ addons  │ addons-529420 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:50 UTC │ 29 Sep 25 10:51 UTC │
	│ addons  │ addons-529420 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:51 UTC │ 29 Sep 25 10:51 UTC │
	│ addons  │ addons-529420 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-529420 │ jenkins │ v1.37.0 │ 29 Sep 25 10:51 UTC │ 29 Sep 25 10:51 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:36:12
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:36:12.344032  809094 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:36:12.344307  809094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:36:12.344317  809094 out.go:374] Setting ErrFile to fd 2...
	I0929 10:36:12.344322  809094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:36:12.344556  809094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 10:36:12.345145  809094 out.go:368] Setting JSON to false
	I0929 10:36:12.346064  809094 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8313,"bootTime":1759133859,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:36:12.346178  809094 start.go:140] virtualization: kvm guest
	I0929 10:36:12.347964  809094 out.go:179] * [addons-529420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:36:12.349153  809094 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:36:12.349158  809094 notify.go:220] Checking for updates...
	I0929 10:36:12.351412  809094 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:36:12.352567  809094 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:36:12.353660  809094 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:36:12.354776  809094 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:36:12.356142  809094 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:36:12.357739  809094 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:36:12.381211  809094 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:36:12.381322  809094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:36:12.437520  809094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:36:12.425260973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:36:12.437676  809094 docker.go:318] overlay module found
	I0929 10:36:12.439238  809094 out.go:179] * Using the docker driver based on user configuration
	I0929 10:36:12.440251  809094 start.go:304] selected driver: docker
	I0929 10:36:12.440267  809094 start.go:924] validating driver "docker" against <nil>
	I0929 10:36:12.440280  809094 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:36:12.440926  809094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:36:12.500513  809094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:36:12.490511662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:36:12.500726  809094 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:36:12.500930  809094 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:36:12.502519  809094 out.go:179] * Using Docker driver with root privileges
	I0929 10:36:12.503551  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:12.503623  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:12.503635  809094 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:36:12.503733  809094 start.go:348] cluster config:
	{Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:36:12.504920  809094 out.go:179] * Starting "addons-529420" primary control-plane node in "addons-529420" cluster
	I0929 10:36:12.505901  809094 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 10:36:12.506980  809094 out.go:179] * Pulling base image v0.0.48 ...
	I0929 10:36:12.508072  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:12.508109  809094 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:36:12.508127  809094 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 10:36:12.508138  809094 cache.go:58] Caching tarball of preloaded images
	I0929 10:36:12.508233  809094 preload.go:172] Found /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 10:36:12.508247  809094 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 10:36:12.508686  809094 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json ...
	I0929 10:36:12.508722  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json: {Name:mkf2bdb1c1bab8c96c543722d15e10cff077f479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
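
The profile config written above is plain JSON, so individual fields can be read back when debugging a run. A minimal sketch, assuming jq is installed on the host and the field layout matches the cluster config dump above:

    # Read one field back from the saved profile config (jq assumed available)
    jq '.KubernetesConfig.KubernetesVersion' \
      /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json
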
	I0929 10:36:12.525703  809094 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:36:12.525838  809094 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:36:12.525859  809094 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:36:12.525863  809094 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:36:12.525870  809094 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:36:12.525876  809094 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 10:36:25.658111  809094 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 10:36:25.658161  809094 cache.go:232] Successfully downloaded all kic artifacts
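
The kic base image is pinned by digest, which is what makes the cache checks above safe to skip on a hit. The same check-then-pull logic, sketched as shell:

    # Pull the kicbase image only if the digest-pinned reference is missing locally
    IMG='gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1'
    docker image inspect "$IMG" >/dev/null 2>&1 || docker pull "$IMG"
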
	I0929 10:36:25.658213  809094 start.go:360] acquireMachinesLock for addons-529420: {Name:mkb1d52da565bdfba9b5d34f2b7f2b7e0a10b5f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:36:25.658335  809094 start.go:364] duration metric: took 99.814µs to acquireMachinesLock for "addons-529420"
	I0929 10:36:25.658362  809094 start.go:93] Provisioning new machine with config: &{Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 10:36:25.658452  809094 start.go:125] createHost starting for "" (driver="docker")
	I0929 10:36:25.660487  809094 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 10:36:25.660770  809094 start.go:159] libmachine.API.Create for "addons-529420" (driver="docker")
	I0929 10:36:25.660815  809094 client.go:168] LocalClient.Create starting
	I0929 10:36:25.660956  809094 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem
	I0929 10:36:25.875289  809094 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem
	I0929 10:36:26.406689  809094 cli_runner.go:164] Run: docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 10:36:26.424628  809094 cli_runner.go:211] docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 10:36:26.424709  809094 network_create.go:284] running [docker network inspect addons-529420] to gather additional debugging logs...
	I0929 10:36:26.424730  809094 cli_runner.go:164] Run: docker network inspect addons-529420
	W0929 10:36:26.443146  809094 cli_runner.go:211] docker network inspect addons-529420 returned with exit code 1
	I0929 10:36:26.443184  809094 network_create.go:287] error running [docker network inspect addons-529420]: docker network inspect addons-529420: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-529420 not found
	I0929 10:36:26.443196  809094 network_create.go:289] output of [docker network inspect addons-529420]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-529420 not found
	
	** /stderr **
	I0929 10:36:26.443292  809094 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:36:26.462904  809094 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bea4c0}
	I0929 10:36:26.462945  809094 network_create.go:124] attempt to create docker network addons-529420 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 10:36:26.462999  809094 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-529420 addons-529420
	I0929 10:36:26.525434  809094 network_create.go:108] docker network addons-529420 192.168.49.0/24 created
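
To confirm the subnet and gateway minikube settled on, the inspect call from the log can be narrowed to the IPAM fields, e.g.:

    # Print just the subnet and gateway of the newly created bridge network
    docker network inspect addons-529420 \
      --format '{{range .IPAM.Config}}{{.Subnet}} gw={{.Gateway}}{{end}}'
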
	I0929 10:36:26.525484  809094 kic.go:121] calculated static IP "192.168.49.2" for the "addons-529420" container
	I0929 10:36:26.525595  809094 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 10:36:26.543028  809094 cli_runner.go:164] Run: docker volume create addons-529420 --label name.minikube.sigs.k8s.io=addons-529420 --label created_by.minikube.sigs.k8s.io=true
	I0929 10:36:26.562375  809094 oci.go:103] Successfully created a docker volume addons-529420
	I0929 10:36:26.562495  809094 cli_runner.go:164] Run: docker run --rm --name addons-529420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --entrypoint /usr/bin/test -v addons-529420:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 10:36:28.727983  809094 cli_runner.go:217] Completed: docker run --rm --name addons-529420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --entrypoint /usr/bin/test -v addons-529420:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (2.165422344s)
	I0929 10:36:28.728035  809094 oci.go:107] Successfully prepared a docker volume addons-529420
	I0929 10:36:28.728077  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:28.728109  809094 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 10:36:28.728393  809094 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-529420:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 10:36:32.463008  809094 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-529420:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.734545448s)
	I0929 10:36:32.463043  809094 kic.go:203] duration metric: took 3.734930316s to extract preloaded images to volume ...
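
The preload is applied by running a throwaway container with tar as its entrypoint against the named volume. The result can be spot-checked the same way; a sketch, assuming a busybox image is acceptable for the probe:

    # List what the preload extraction left inside the addons-529420 volume
    docker run --rm -v addons-529420:/var busybox ls /var/lib/docker
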
	W0929 10:36:32.463138  809094 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 10:36:32.463171  809094 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 10:36:32.463213  809094 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 10:36:32.517633  809094 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-529420 --name addons-529420 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-529420 --network addons-529420 --ip 192.168.49.2 --volume addons-529420:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 10:36:32.782605  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Running}}
	I0929 10:36:32.802682  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:32.821845  809094 cli_runner.go:164] Run: docker exec addons-529420 stat /var/lib/dpkg/alternatives/iptables
	I0929 10:36:32.871577  809094 oci.go:144] the created container "addons-529420" has a running status.
	I0929 10:36:32.871612  809094 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa...
	I0929 10:36:33.119788  809094 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 10:36:33.151014  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:33.170919  809094 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 10:36:33.170942  809094 kic_runner.go:114] Args: [docker exec --privileged addons-529420 chown docker:docker /home/docker/.ssh/authorized_keys]
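
minikube generates the kic SSH key pair in Go, but the sequence maps onto ordinary tooling: create a key, install the public half as authorized_keys in the container, and fix ownership. A rough, purely illustrative shell equivalent:

    # Illustrative shell version of the kic SSH key setup above
    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker exec --privileged addons-529420 mkdir -p /home/docker/.ssh
    docker cp ./id_rsa.pub addons-529420:/home/docker/.ssh/authorized_keys
    docker exec --privileged addons-529420 chown -R docker:docker /home/docker/.ssh
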
	I0929 10:36:33.215532  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:33.236236  809094 machine.go:93] provisionDockerMachine start ...
	I0929 10:36:33.236361  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.258174  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.258466  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.258486  809094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 10:36:33.399100  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-529420
	
	I0929 10:36:33.399136  809094 ubuntu.go:182] provisioning hostname "addons-529420"
	I0929 10:36:33.399203  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.418829  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.419101  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.419124  809094 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-529420 && echo "addons-529420" | sudo tee /etc/hostname
	I0929 10:36:33.570695  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-529420
	
	I0929 10:36:33.570813  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.590267  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.590566  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.590590  809094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-529420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-529420/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-529420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:36:33.728831  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:36:33.728862  809094 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21656-804126/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-804126/.minikube}
	I0929 10:36:33.728905  809094 ubuntu.go:190] setting up certificates
	I0929 10:36:33.728918  809094 provision.go:84] configureAuth start
	I0929 10:36:33.728972  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:33.748261  809094 provision.go:143] copyHostCerts
	I0929 10:36:33.748352  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/ca.pem (1082 bytes)
	I0929 10:36:33.748507  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/cert.pem (1123 bytes)
	I0929 10:36:33.748577  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/key.pem (1679 bytes)
	I0929 10:36:33.748631  809094 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem org=jenkins.addons-529420 san=[127.0.0.1 192.168.49.2 addons-529420 localhost minikube]
	I0929 10:36:33.869211  809094 provision.go:177] copyRemoteCerts
	I0929 10:36:33.869279  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:36:33.869324  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.888791  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:33.988019  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:36:34.017500  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:36:34.045429  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 10:36:34.073342  809094 provision.go:87] duration metric: took 344.406928ms to configureAuth
	I0929 10:36:34.073378  809094 ubuntu.go:206] setting minikube options for container-runtime
	I0929 10:36:34.073606  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:34.073667  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.093622  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.093924  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.093946  809094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 10:36:34.233714  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 10:36:34.233744  809094 ubuntu.go:71] root file system type: overlay
	I0929 10:36:34.233863  809094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 10:36:34.233924  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.253864  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.254082  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.254160  809094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 10:36:34.407267  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 10:36:34.407373  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.426330  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.426664  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.426694  809094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 10:36:35.600389  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 10:36:34.405290761 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0929 10:36:35.600442  809094 machine.go:96] duration metric: took 2.364149235s to provisionDockerMachine
	I0929 10:36:35.600461  809094 client.go:171] duration metric: took 9.939636569s to LocalClient.Create
	I0929 10:36:35.600487  809094 start.go:167] duration metric: took 9.939718481s to libmachine.API.Create "addons-529420"
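
The unit swap at 10:36:34 uses a small idempotency idiom: diff the freshly rendered unit against the installed one, and only move it into place and restart the daemon when they differ, so unchanged reruns leave docker alone. Stripped down to the bare pattern (paths as in the log):

    # Replace-and-restart only when the rendered unit actually changed
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl restart docker
    }
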
	I0929 10:36:35.600498  809094 start.go:293] postStartSetup for "addons-529420" (driver="docker")
	I0929 10:36:35.600507  809094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:36:35.600574  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:36:35.600613  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.620381  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.722159  809094 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:36:35.726032  809094 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 10:36:35.726071  809094 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 10:36:35.726079  809094 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 10:36:35.726086  809094 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 10:36:35.726099  809094 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-804126/.minikube/addons for local assets ...
	I0929 10:36:35.726162  809094 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-804126/.minikube/files for local assets ...
	I0929 10:36:35.726189  809094 start.go:296] duration metric: took 125.685891ms for postStartSetup
	I0929 10:36:35.726542  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:35.746842  809094 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json ...
	I0929 10:36:35.747139  809094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:36:35.747190  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.766493  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.861001  809094 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 10:36:35.865987  809094 start.go:128] duration metric: took 10.207507252s to createHost
	I0929 10:36:35.866014  809094 start.go:83] releasing machines lock for "addons-529420", held for 10.207665879s
	I0929 10:36:35.866077  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:35.884982  809094 ssh_runner.go:195] Run: cat /version.json
	I0929 10:36:35.885044  809094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:36:35.885050  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.885102  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.904087  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.905861  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:36.079122  809094 ssh_runner.go:195] Run: systemctl --version
	I0929 10:36:36.084194  809094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 10:36:36.089268  809094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 10:36:36.123533  809094 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 10:36:36.123613  809094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:36:36.154267  809094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 10:36:36.154304  809094 start.go:495] detecting cgroup driver to use...
	I0929 10:36:36.154338  809094 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:36:36.154522  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:36:36.173730  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 10:36:36.186991  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 10:36:36.198504  809094 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 10:36:36.198583  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 10:36:36.210007  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 10:36:36.221306  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 10:36:36.232714  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 10:36:36.244397  809094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:36:36.255307  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 10:36:36.266876  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 10:36:36.278201  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
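
The run of sed edits above rewrites /etc/containerd/config.toml in place: systemd cgroups on, legacy runtime names mapped to io.containerd.runc.v2, unprivileged ports enabled. A quick check that the key change took effect:

    # Confirm containerd was switched to the systemd cgroup driver
    grep -n 'SystemdCgroup = true' /etc/containerd/config.toml
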
	I0929 10:36:36.289714  809094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:36:36.299414  809094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 10:36:36.309736  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:36.379690  809094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 10:36:36.458212  809094 start.go:495] detecting cgroup driver to use...
	I0929 10:36:36.458261  809094 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:36:36.458313  809094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 10:36:36.471829  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:36:36.484463  809094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:36:36.501567  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:36:36.514414  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 10:36:36.527777  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:36:36.546772  809094 ssh_runner.go:195] Run: which cri-dockerd
	I0929 10:36:36.550960  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 10:36:36.562724  809094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 10:36:36.582736  809094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 10:36:36.653862  809094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 10:36:36.724043  809094 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 10:36:36.724204  809094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 10:36:36.744794  809094 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 10:36:36.757465  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:36.826884  809094 ssh_runner.go:195] Run: sudo systemctl restart docker
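
The 129-byte /etc/docker/daemon.json written at 10:36:36 is not echoed in the log, so its exact contents are an assumption here; a daemon.json that selects the systemd cgroup driver typically looks like the sketch below:

    # Plausible daemon.json for the systemd cgroup driver (contents assumed, not from the log)
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker
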
	I0929 10:36:37.610007  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:36:37.622898  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 10:36:37.636301  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 10:36:37.649738  809094 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 10:36:37.722666  809094 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 10:36:37.796505  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:37.867566  809094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 10:36:37.892266  809094 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 10:36:37.904989  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:37.972029  809094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 10:36:38.051455  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 10:36:38.065182  809094 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 10:36:38.065259  809094 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 10:36:38.069600  809094 start.go:563] Will wait 60s for crictl version
	I0929 10:36:38.069675  809094 ssh_runner.go:195] Run: which crictl
	I0929 10:36:38.073569  809094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:36:38.110640  809094 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 10:36:38.110713  809094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 10:36:38.140525  809094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 10:36:38.168486  809094 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 10:36:38.168581  809094 cli_runner.go:164] Run: docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:36:38.186777  809094 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 10:36:38.191260  809094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
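
The /etc/hosts update above is another idempotent rewrite: strip any existing mapping for the name, then append the current one, so repeated starts never accumulate duplicate entries. The pattern in isolation:

    # Idempotent hosts entry: drop the stale line, append the fresh mapping
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
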
	I0929 10:36:38.203969  809094 kubeadm.go:875] updating cluster {Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:36:38.204104  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:38.204155  809094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 10:36:38.227008  809094 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 10:36:38.227036  809094 docker.go:621] Images already preloaded, skipping extraction
	I0929 10:36:38.227106  809094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 10:36:38.249601  809094 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 10:36:38.249632  809094 cache_images.go:85] Images are preloaded, skipping loading
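
Because the preload manifest is known, the "Images are preloaded" decision can be reproduced with a presence loop over the expected references; a sketch using part of the list from the log:

    # Report any expected preloaded image missing from the daemon
    for img in registry.k8s.io/kube-apiserver:v1.34.0 \
               registry.k8s.io/etcd:3.6.4-0 \
               registry.k8s.io/coredns/coredns:v1.12.1 \
               registry.k8s.io/pause:3.10.1; do
      docker image inspect "$img" >/dev/null 2>&1 || echo "missing: $img"
    done
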
	I0929 10:36:38.249646  809094 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0929 10:36:38.249774  809094 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-529420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 10:36:38.249838  809094 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 10:36:38.304936  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:38.304974  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:38.305005  809094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:36:38.305052  809094 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-529420 NodeName:addons-529420 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:36:38.305228  809094 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-529420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 10:36:38.305304  809094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:36:38.315730  809094 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:36:38.315813  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:36:38.326318  809094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 10:36:38.346540  809094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:36:38.366500  809094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
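
With the rendered config copied to the node, it can be sanity-checked before kubeadm runs. A sketch, assuming the kubeadm binary sits alongside kubelet in the binaries directory and is new enough (v1.26+) to have "config validate":

    # Validate the generated kubeadm config in place
    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
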
	I0929 10:36:38.386461  809094 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 10:36:38.390498  809094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:36:38.403052  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:38.473834  809094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:36:38.499163  809094 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420 for IP: 192.168.49.2
	I0929 10:36:38.499185  809094 certs.go:194] generating shared ca certs ...
	I0929 10:36:38.499202  809094 certs.go:226] acquiring lock for ca certs: {Name:mkc1a6b9be22ca2c6acbf4ae075c8649cd7bb029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.499334  809094 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key
	I0929 10:36:38.646613  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt ...
	I0929 10:36:38.646650  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt: {Name:mk6af424fad818aae2ecb63097cb3cf4fa072fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.646838  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key ...
	I0929 10:36:38.646852  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key: {Name:mk82f55830669e727af6d6e5b82afa28223d781a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.646926  809094 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key
	I0929 10:36:38.721965  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt ...
	I0929 10:36:38.721996  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt: {Name:mkfb52e92ed776fc971f1dc2b44df4c3ddc05e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.722184  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key ...
	I0929 10:36:38.722197  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key: {Name:mk1e237f25fee0cc4ff615a96b95c3d6470c2cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.722265  809094 certs.go:256] generating profile certs ...
	I0929 10:36:38.722327  809094 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key
	I0929 10:36:38.722341  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt with IP's: []
	I0929 10:36:39.380758  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt ...
	I0929 10:36:39.380790  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: {Name:mkc77ef14ddf2986f76245d42c12726cef377eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.380976  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key ...
	I0929 10:36:39.380988  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key: {Name:mke6b630fea371c2903c210193a2d24b7bb1f06f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.381077  809094 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27
	I0929 10:36:39.381099  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 10:36:39.520731  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 ...
	I0929 10:36:39.520767  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27: {Name:mkcca52b9df73c7117f5aaacf4f623dd6c9d43c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.520941  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27 ...
	I0929 10:36:39.520956  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27: {Name:mk26b5e71bc5d928280847a07112275ef7de7abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.521026  809094 certs.go:381] copying /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 -> /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt
	I0929 10:36:39.521153  809094 certs.go:385] copying /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27 -> /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key
	I0929 10:36:39.521216  809094 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key
	I0929 10:36:39.521237  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt with IP's: []
	I0929 10:36:39.588037  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt ...
	I0929 10:36:39.588071  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt: {Name:mkf924850f5da9a86daf75692dc66662efd3fc9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.588249  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key ...
	I0929 10:36:39.588264  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key: {Name:mkecb516723c573c2dd57316eed6cd1d6263ba08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.588475  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:36:39.588513  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:36:39.588537  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:36:39.588557  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem (1679 bytes)
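
The crypto.go/certs.go lines above cover two cases: a self-signed CA (the proxy-client-ca pair) and a profile cert signed by a CA with IP SANs (the apiserver cert for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]). A self-contained sketch of that flow using only the standard library — key size, lifetime, and subject names are assumptions, not minikube's actual parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func writePEM(path, blockType string, der []byte) {
	f, err := os.Create(path)
	check(err)
	defer f.Close()
	check(pem.Encode(f, &pem.Block{Type: blockType, Bytes: der}))
}

func main() {
	// 1. Self-signed CA, analogous to the proxy-client-ca pair above.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"}, // illustrative name
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0), // lifetime is an assumption
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// The template doubles as its own parent: that is what makes it self-signed.
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// 2. Serving cert signed by that CA, with IP SANs like the apiserver
	//    cert generated above for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2].
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	writePEM("ca.crt", "CERTIFICATE", caDER)
	writePEM("apiserver.crt", "CERTIFICATE", srvDER)
	writePEM("apiserver.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(srvKey))
}
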
	I0929 10:36:39.589229  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:36:39.617084  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 10:36:39.644094  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:36:39.671391  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 10:36:39.698770  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:36:39.726395  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 10:36:39.753719  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:36:39.781288  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 10:36:39.808573  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:36:39.838955  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:36:39.859027  809094 ssh_runner.go:195] Run: openssl version
	I0929 10:36:39.865345  809094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:36:39.879241  809094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.883476  809094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.883550  809094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.891125  809094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
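
The openssl/ln steps above follow OpenSSL's hashed-directory convention: TLS clients look up CAs in /etc/ssl/certs by subject hash (b5213941 in this run), so the CA must also be reachable under <hash>.0. A minimal Go rendering of that shell logic, assuming the same paths (illustrative only, not minikube's code; note the log's `test -L ... ||` guard only links when the symlink is missing):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash -noout` prints the certificate's subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// Equivalent of `test -L ... || ln -fs ...`: create the link only if
	// it does not already exist.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}
}
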
	I0929 10:36:39.902246  809094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:36:39.906800  809094 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:36:39.906853  809094 kubeadm.go:392] StartCluster: {Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:36:39.907037  809094 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 10:36:39.927225  809094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:36:39.937579  809094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:36:39.948182  809094 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 10:36:39.948260  809094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:36:39.958474  809094 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:36:39.958499  809094 kubeadm.go:157] found existing configuration files:
	
	I0929 10:36:39.958547  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:36:39.968562  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:36:39.968615  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:36:39.978358  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:36:39.988499  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:36:39.988557  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:36:39.998306  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:36:40.008316  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:36:40.008394  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:36:40.018692  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:36:40.028624  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:36:40.028694  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:36:40.038485  809094 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 10:36:40.113436  809094 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 10:36:40.172082  809094 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:36:49.487259  809094 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:36:49.487349  809094 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:36:49.487474  809094 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 10:36:49.487549  809094 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 10:36:49.487599  809094 kubeadm.go:310] OS: Linux
	I0929 10:36:49.487674  809094 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 10:36:49.487738  809094 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 10:36:49.487808  809094 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 10:36:49.487877  809094 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 10:36:49.487945  809094 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 10:36:49.488009  809094 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 10:36:49.488084  809094 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 10:36:49.488155  809094 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 10:36:49.488236  809094 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:36:49.488334  809094 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:36:49.488489  809094 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:36:49.488588  809094 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:36:49.490162  809094 out.go:252]   - Generating certificates and keys ...
	I0929 10:36:49.490252  809094 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:36:49.490322  809094 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:36:49.490390  809094 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:36:49.490463  809094 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:36:49.490546  809094 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:36:49.490627  809094 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:36:49.490693  809094 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:36:49.490785  809094 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-529420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:36:49.490833  809094 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:36:49.490923  809094 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-529420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:36:49.490993  809094 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:36:49.491078  809094 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:36:49.491125  809094 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:36:49.491174  809094 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:36:49.491224  809094 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:36:49.491279  809094 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:36:49.491356  809094 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:36:49.491494  809094 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:36:49.491545  809094 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:36:49.491621  809094 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:36:49.491692  809094 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:36:49.493671  809094 out.go:252]   - Booting up control plane ...
	I0929 10:36:49.493764  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:36:49.493839  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:36:49.493913  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:36:49.494005  809094 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:36:49.494093  809094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:36:49.494209  809094 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:36:49.494311  809094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:36:49.494360  809094 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:36:49.494507  809094 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:36:49.494615  809094 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:36:49.494666  809094 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001039286s
	I0929 10:36:49.494753  809094 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:36:49.494843  809094 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 10:36:49.494970  809094 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:36:49.495066  809094 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:36:49.495136  809094 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.704142157s
	I0929 10:36:49.495206  809094 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.204406716s
	I0929 10:36:49.495290  809094 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.502382535s
	I0929 10:36:49.495410  809094 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:36:49.495564  809094 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:36:49.495618  809094 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:36:49.495799  809094 kubeadm.go:310] [mark-control-plane] Marking the node addons-529420 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:36:49.495870  809094 kubeadm.go:310] [bootstrap-token] Using token: yuv8vz.ch07v7kt1fq2e5wb
	I0929 10:36:49.498139  809094 out.go:252]   - Configuring RBAC rules ...
	I0929 10:36:49.498247  809094 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:36:49.498343  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:36:49.498483  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:36:49.498597  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:36:49.498726  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:36:49.498844  809094 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:36:49.498986  809094 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:36:49.499027  809094 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:36:49.499068  809094 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:36:49.499073  809094 kubeadm.go:310] 
	I0929 10:36:49.499124  809094 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:36:49.499129  809094 kubeadm.go:310] 
	I0929 10:36:49.499199  809094 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:36:49.499205  809094 kubeadm.go:310] 
	I0929 10:36:49.499230  809094 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:36:49.499295  809094 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:36:49.499340  809094 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:36:49.499345  809094 kubeadm.go:310] 
	I0929 10:36:49.499394  809094 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:36:49.499400  809094 kubeadm.go:310] 
	I0929 10:36:49.499496  809094 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:36:49.499503  809094 kubeadm.go:310] 
	I0929 10:36:49.499547  809094 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:36:49.499632  809094 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:36:49.499694  809094 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:36:49.499700  809094 kubeadm.go:310] 
	I0929 10:36:49.499768  809094 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:36:49.499834  809094 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:36:49.499840  809094 kubeadm.go:310] 
	I0929 10:36:49.499917  809094 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yuv8vz.ch07v7kt1fq2e5wb \
	I0929 10:36:49.500007  809094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52ed2ed4898a1f91b8e8d0663d1b9110ba480fd365338f5c4892399f9ccfb123 \
	I0929 10:36:49.500027  809094 kubeadm.go:310] 	--control-plane 
	I0929 10:36:49.500033  809094 kubeadm.go:310] 
	I0929 10:36:49.500105  809094 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:36:49.500110  809094 kubeadm.go:310] 
	I0929 10:36:49.500183  809094 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yuv8vz.ch07v7kt1fq2e5wb \
	I0929 10:36:49.500291  809094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52ed2ed4898a1f91b8e8d0663d1b9110ba480fd365338f5c4892399f9ccfb123 
	I0929 10:36:49.500305  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:49.500322  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:49.501607  809094 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 10:36:49.502541  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 10:36:49.513883  809094 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
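
The 496-byte 1-k8s.conflist itself is not shown in the log. A representative bridge + host-local conflist of roughly that shape, emitted with the standard library — the contents are an assumption based on the stock CNI bridge plugin schema, not the exact file minikube generates; only the 10.244.0.0/16 subnet is grounded in the pod IPs (10.244.0.x) seen elsewhere in this report:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Representative plugin chain: a bridge with host-local IPAM plus the
	// portmap plugin. All field values here are assumptions except the subnet.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // pod IPs in this report are 10.244.0.x
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
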
	I0929 10:36:49.534107  809094 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:36:49.534178  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:49.534217  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-529420 minikube.k8s.io/updated_at=2025_09_29T10_36_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8 minikube.k8s.io/name=addons-529420 minikube.k8s.io/primary=true
	I0929 10:36:49.634858  809094 ops.go:34] apiserver oom_adj: -16
	I0929 10:36:49.634879  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:50.135279  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:50.635199  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:51.135670  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:51.635820  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:52.135653  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:52.635900  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.135655  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.635560  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.702327  809094 kubeadm.go:1105] duration metric: took 4.168204298s to wait for elevateKubeSystemPrivileges
	I0929 10:36:53.702374  809094 kubeadm.go:394] duration metric: took 13.795525537s to StartCluster
	I0929 10:36:53.702401  809094 settings.go:142] acquiring lock: {Name:mkba08796c8ac17f1f94d6f1a6799aa6725155e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:53.702549  809094 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:36:53.703073  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/kubeconfig: {Name:mkae1c4acd5fb2ec6cdb06d7e9593ac36dd4ccbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:53.703279  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:36:53.703300  809094 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 10:36:53.703359  809094 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 10:36:53.703546  809094 addons.go:69] Setting ingress=true in profile "addons-529420"
	I0929 10:36:53.703557  809094 addons.go:69] Setting metrics-server=true in profile "addons-529420"
	I0929 10:36:53.703575  809094 addons.go:69] Setting volcano=true in profile "addons-529420"
	I0929 10:36:53.703577  809094 addons.go:238] Setting addon ingress=true in "addons-529420"
	I0929 10:36:53.703583  809094 addons.go:238] Setting addon metrics-server=true in "addons-529420"
	I0929 10:36:53.703600  809094 addons.go:238] Setting addon volcano=true in "addons-529420"
	I0929 10:36:53.703611  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:53.703623  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703615  809094 addons.go:69] Setting storage-provisioner=true in profile "addons-529420"
	I0929 10:36:53.703640  809094 addons.go:69] Setting inspektor-gadget=true in profile "addons-529420"
	I0929 10:36:53.703649  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703650  809094 addons.go:238] Setting addon inspektor-gadget=true in "addons-529420"
	I0929 10:36:53.703670  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703670  809094 addons.go:69] Setting default-storageclass=true in profile "addons-529420"
	I0929 10:36:53.703673  809094 addons.go:238] Setting addon storage-provisioner=true in "addons-529420"
	I0929 10:36:53.703685  809094 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-529420"
	I0929 10:36:53.703665  809094 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-529420"
	I0929 10:36:53.703742  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703752  809094 addons.go:69] Setting registry=true in profile "addons-529420"
	I0929 10:36:53.703784  809094 addons.go:238] Setting addon registry=true in "addons-529420"
	I0929 10:36:53.703797  809094 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-529420"
	I0929 10:36:53.703816  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703839  809094 addons.go:69] Setting registry-creds=true in profile "addons-529420"
	I0929 10:36:53.703853  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703864  809094 addons.go:238] Setting addon registry-creds=true in "addons-529420"
	I0929 10:36:53.703888  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.704091  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704229  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704236  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704241  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704269  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704280  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704369  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704383  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704822  809094 addons.go:69] Setting gcp-auth=true in profile "addons-529420"
	I0929 10:36:53.704853  809094 mustload.go:65] Loading cluster: addons-529420
	I0929 10:36:53.704946  809094 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-529420"
	I0929 10:36:53.704967  809094 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-529420"
	I0929 10:36:53.705010  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.705063  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:53.705314  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.705514  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.705877  809094 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-529420"
	I0929 10:36:53.705901  809094 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-529420"
	I0929 10:36:53.705927  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.706142  809094 out.go:179] * Verifying Kubernetes components...
	I0929 10:36:53.706320  809094 addons.go:69] Setting volumesnapshots=true in profile "addons-529420"
	I0929 10:36:53.706369  809094 addons.go:238] Setting addon volumesnapshots=true in "addons-529420"
	I0929 10:36:53.706411  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703632  809094 addons.go:69] Setting ingress-dns=true in profile "addons-529420"
	I0929 10:36:53.706720  809094 addons.go:238] Setting addon ingress-dns=true in "addons-529420"
	I0929 10:36:53.706754  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.707040  809094 addons.go:69] Setting cloud-spanner=true in profile "addons-529420"
	I0929 10:36:53.707069  809094 addons.go:238] Setting addon cloud-spanner=true in "addons-529420"
	I0929 10:36:53.707096  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703568  809094 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-529420"
	I0929 10:36:53.707458  809094 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-529420"
	I0929 10:36:53.707552  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:53.703557  809094 addons.go:69] Setting yakd=true in profile "addons-529420"
	I0929 10:36:53.707960  809094 addons.go:238] Setting addon yakd=true in "addons-529420"
	I0929 10:36:53.707999  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703623  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.714143  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.714652  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719013  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719122  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719970  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.722035  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.723786  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.778757  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.780491  809094 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:36:53.781884  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:36:53.781909  809094 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:36:53.782011  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.783509  809094 addons.go:238] Setting addon default-storageclass=true in "addons-529420"
	I0929 10:36:53.783669  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.786763  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.807336  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:36:53.807376  809094 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:36:53.807346  809094 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:36:53.807406  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:36:53.808907  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:36:53.809965  809094 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-529420"
	I0929 10:36:53.810002  809094 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:36:53.810022  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:36:53.810026  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.810093  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.809085  809094 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:36:53.809123  809094 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:36:53.810358  809094 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:36:53.810445  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.810589  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.811657  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:36:53.811713  809094 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:36:53.811728  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:36:53.811729  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:36:53.811785  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.812811  809094 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:36:53.812830  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:36:53.812883  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.813727  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:36:53.814898  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:36:53.816283  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:36:53.817528  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 10:36:53.818644  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:36:53.820451  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:36:53.820547  809094 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:36:53.820676  809094 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:36:53.821594  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:36:53.821612  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:36:53.821686  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.822262  809094 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:36:53.823314  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:36:53.822343  809094 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:36:53.823758  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:36:53.823888  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.824274  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.826866  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:36:53.828202  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:36:53.828229  809094 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:36:53.828293  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.831468  809094 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:36:53.831558  809094 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:36:53.831581  809094 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I0929 10:36:53.831884  809094 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:36:53.835051  809094 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:36:53.835084  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:36:53.835153  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.836300  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:36:53.836321  809094 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:36:53.836386  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.836883  809094 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I0929 10:36:53.837031  809094 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:36:53.838461  809094 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:36:53.838481  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:36:53.838563  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.839840  809094 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I0929 10:36:53.846729  809094 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 10:36:53.847497  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I0929 10:36:53.847543  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.847569  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.862719  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.868937  809094 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:36:53.870360  809094 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:36:53.870448  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:36:53.870559  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.876206  809094 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:36:53.876242  809094 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:36:53.879516  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.881669  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
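
Unpacking the sed pipeline above: it splices a hosts stanza ahead of the Corefile's forward directive (and a log directive ahead of errors) so that in-cluster DNS resolves host.minikube.internal to the gateway 192.168.49.1. Reconstructed directly from the sed expressions, the inserted stanza is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
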
	I0929 10:36:53.897463  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.897986  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.902073  809094 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:36:53.907027  809094 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:36:53.908323  809094 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:36:53.908508  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:36:53.908601  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.912847  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.916620  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.916689  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.918495  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.924984  809094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:36:53.927315  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.928318  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.929791  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.931522  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.931855  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.932316  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	W0929 10:36:53.933912  809094 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 10:36:53.933975  809094 retry.go:31] will retry after 222.37842ms: ssh: handshake failed: EOF
	I0929 10:36:53.947593  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.951956  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:54.017495  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:36:54.017528  809094 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:36:54.037524  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:36:54.047797  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:36:54.047824  809094 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:36:54.088850  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:36:54.102982  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:36:54.104728  809094 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:36:54.104759  809094 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:36:54.105824  809094 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:54.105847  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:36:54.113997  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:36:54.115206  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:36:54.115595  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:36:54.115658  809094 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:36:54.115954  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:36:54.131289  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:36:54.131324  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:36:54.141402  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:36:54.141685  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:36:54.141711  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:36:54.146347  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:36:54.146375  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:36:54.148830  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 10:36:54.158233  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:36:54.187390  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:54.199350  809094 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:36:54.199378  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:36:54.219151  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:36:54.219235  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:36:54.228377  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:36:54.228502  809094 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:36:54.236642  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:36:54.236676  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:36:54.249985  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:36:54.250018  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:36:54.257015  809094 node_ready.go:35] waiting up to 6m0s for node "addons-529420" to be "Ready" ...
	I0929 10:36:54.257465  809094 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 10:36:54.260181  809094 node_ready.go:49] node "addons-529420" is "Ready"
	I0929 10:36:54.260213  809094 node_ready.go:38] duration metric: took 3.156144ms for node "addons-529420" to be "Ready" ...
	I0929 10:36:54.260251  809094 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:36:54.260304  809094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
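
The node_ready.go wait above boils down to reading the node's Ready condition from the API server. A sketch of an equivalent standalone check with client-go — an assumed equivalent, not minikube's code, and the KUBECONFIG env var and node name are illustrative:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig the test environment points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-529420", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The node counts as "Ready" when this condition is ConditionTrue.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node Ready=%s\n", c.Status)
		}
	}
}
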
	I0929 10:36:54.275549  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:36:54.288410  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:36:54.288527  809094 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:36:54.309547  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:36:54.326392  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:36:54.326462  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:36:54.371852  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:36:54.371884  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:36:54.385903  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:36:54.406122  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:36:54.441278  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:36:54.441321  809094 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:36:54.475876  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:36:54.475975  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:36:54.558603  809094 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:54.558634  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:36:54.567254  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:36:54.567363  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:36:54.637129  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:36:54.637160  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:36:54.681921  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:54.728318  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:36:54.728360  809094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:36:54.766660  809094 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-529420" context rescaled to 1 replicas
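(The coredns rescale logged by kapi.go:214 is an ordinary deployment scale; roughly the equivalent kubectl invocation, assuming the kube-system namespace:)

    kubectl --context addons-529420 -n kube-system \
      scale deployment coredns --replicas=1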
	I0929 10:36:54.848450  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:36:54.848663  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:36:54.946361  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:36:54.946523  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:36:55.079437  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:36:55.079471  809094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:36:55.190890  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:36:55.493984  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.390947237s)
	I0929 10:36:55.494004  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.379973258s)
	I0929 10:36:55.494048  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.378812825s)
	I0929 10:36:56.183759  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.042240387s)
	I0929 10:36:56.184086  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.068081762s)
	I0929 10:36:56.184109  809094 addons.go:479] Verifying addon ingress=true in "addons-529420"
	I0929 10:36:56.185647  809094 out.go:179] * Verifying ingress addon...
	I0929 10:36:56.187598  809094 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:36:56.207003  809094 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:36:56.207034  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:56.705051  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.193166  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.222996  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.064717663s)
	I0929 10:36:57.223366  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.035944416s)
	W0929 10:36:57.223400  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:36:57.223437  809094 retry.go:31] will retry after 152.765294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1 (stdout/stderr identical to the warning above)
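(The "[apiVersion not set, kind not set]" failure means at least one YAML document inside ig-crd.yaml is missing the two mandatory type fields, so kubectl's client-side validation rejects the file before anything reaches the API server. A hypothetical reproduction — the file path and contents below are illustrative, the error text is the same:)

    cat <<'EOF' > /tmp/headerless.yaml
    metadata:
      name: example
    EOF
    kubectl apply --dry-run=client -f /tmp/headerless.yaml
    # error: error validating "/tmp/headerless.yaml": error validating data:
    # [apiVersion not set, kind not set]; if you choose to ignore these errors,
    # turn validation off with --validate=false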
	I0929 10:36:57.223474  809094 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.963156736s)
	I0929 10:36:57.223491  809094 api_server.go:72] duration metric: took 3.520164967s to wait for apiserver process to appear ...
	I0929 10:36:57.223498  809094 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:36:57.223518  809094 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 10:36:57.223916  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.948332459s)
	I0929 10:36:57.223942  809094 addons.go:479] Verifying addon registry=true in "addons-529420"
	I0929 10:36:57.224224  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.914632573s)
	I0929 10:36:57.224522  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.838576934s)
	I0929 10:36:57.224572  809094 addons.go:479] Verifying addon metrics-server=true in "addons-529420"
	I0929 10:36:57.224580  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.818313312s)
	I0929 10:36:57.224688  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542731559s)
	W0929 10:36:57.224716  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:36:57.224732  809094 retry.go:31] will retry after 151.241621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1 (stdout/stderr identical to the warning above)
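(The snapshot-class failure, by contrast, is an ordering problem: the VolumeSnapshotClass CRD is created in the same apply batch, and the API server has not yet registered the new kind when the class object arrives, hence "ensure CRDs are installed first". A sketch of serializing the two steps, assuming the same file layout on the node:)

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml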
	I0929 10:36:57.224940  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.034013059s)
	I0929 10:36:57.224964  809094 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-529420"
	I0929 10:36:57.225804  809094 out.go:179] * Verifying registry addon...
	I0929 10:36:57.225829  809094 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-529420 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:36:57.227287  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.078403021s)
	I0929 10:36:57.228280  809094 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:36:57.229531  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:36:57.230385  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
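(The kapi.go pollers registered above block until every pod behind a label selector reports Ready; roughly the same wait expressed with kubectl, selectors and namespaces taken from the log:)

    kubectl --context addons-529420 -n kube-system wait --for=condition=Ready \
      pod -l kubernetes.io/minikube-addons=registry --timeout=6m
    kubectl --context addons-529420 -n kube-system wait --for=condition=Ready \
      pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m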
	I0929 10:36:57.236158  809094 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 10:36:57.238077  809094 api_server.go:141] control plane version: v1.34.0
	I0929 10:36:57.238109  809094 api_server.go:131] duration metric: took 14.602902ms to wait for apiserver health ...
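(The healthz probe in api_server.go is a plain HTTPS GET against the apiserver; /healthz is readable without credentials under default RBAC via system:public-info-viewer, so either of these hedged equivalents should print "ok":)

    curl -sk https://192.168.49.2:8443/healthz
    kubectl --context addons-529420 get --raw /healthz   # same check, authenticated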
	I0929 10:36:57.238122  809094 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:36:57.295055  809094 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:36:57.295086  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:57.297613  809094 system_pods.go:59] 19 kube-system pods found
	I0929 10:36:57.297672  809094 system_pods.go:61] "amd-gpu-device-plugin-5pr2g" [d246c61e-5b06-4a8a-b834-a8104ed29092] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:36:57.297684  809094 system_pods.go:61] "coredns-66bc5c9577-5qvcx" [431a8d9d-251c-4916-9c89-87d909588d39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:36:57.297700  809094 system_pods.go:61] "csi-hostpath-attacher-0" [70a20e1e-6207-4b51-afa5-e013bc35656f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:36:57.297708  809094 system_pods.go:61] "csi-hostpath-resizer-0" [9de479e4-bad8-4f15-bfd7-f7f8a2dd12e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:36:57.297716  809094 system_pods.go:61] "csi-hostpathplugin-z7562" [bb4e1eda-850e-4da2-ae44-f84511d3ee67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:36:57.297725  809094 system_pods.go:61] "etcd-addons-529420" [1b81bfdd-919b-4cd8-a60e-22f79112660f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:36:57.297730  809094 system_pods.go:61] "kube-apiserver-addons-529420" [47505dbb-1863-4367-98f4-9f7ebb2c9415] Running
	I0929 10:36:57.297746  809094 system_pods.go:61] "kube-controller-manager-addons-529420" [37496a5f-d392-4a23-b1a5-1c7a43d653a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:36:57.297755  809094 system_pods.go:61] "kube-ingress-dns-minikube" [a3dd66da-3ebf-43f5-80b9-e99fc50dfcb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:36:57.297760  809094 system_pods.go:61] "kube-proxy-kxjqx" [99197e14-7d79-403b-87f4-b1f17f52f9f5] Running
	I0929 10:36:57.297767  809094 system_pods.go:61] "kube-scheduler-addons-529420" [ebe74bc8-ca56-4474-99cf-5b6633e8a102] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:36:57.297774  809094 system_pods.go:61] "metrics-server-85b7d694d7-6kvbb" [2261b64a-3e18-4fca-8fb4-4a870b5d302f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:36:57.297781  809094 system_pods.go:61] "nvidia-device-plugin-daemonset-rv5nz" [70340435-f458-4a2a-a617-185f04893887] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:36:57.297791  809094 system_pods.go:61] "registry-66898fdd98-9545s" [6b827e39-0a4c-4539-b408-9bfd1cdda6c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:36:57.297799  809094 system_pods.go:61] "registry-creds-764b6fb674-ssj2n" [a5aecf48-b091-49c9-8743-a9d358c2fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:36:57.297807  809094 system_pods.go:61] "registry-proxy-b2bpz" [87ba9cf0-8f09-4423-b674-f77637763bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:36:57.297814  809094 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bxfh2" [bb19aa00-633c-4f38-88f2-3cd29546aec9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.297826  809094 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pwx9p" [5203fa8d-6a4b-4bc9-89be-9a58d780e11e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.297834  809094 system_pods.go:61] "storage-provisioner" [c791ec78-6e64-4b97-a928-e3f1e19e92c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:36:57.297843  809094 system_pods.go:74] duration metric: took 59.712387ms to wait for pod list to return data ...
	I0929 10:36:57.297854  809094 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:36:57.298160  809094 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:36:57.298172  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:57.305651  809094 default_sa.go:45] found service account: "default"
	I0929 10:36:57.305752  809094 default_sa.go:55] duration metric: took 7.889394ms for default service account to be created ...
	I0929 10:36:57.305777  809094 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:36:57.314079  809094 system_pods.go:86] 19 kube-system pods found
	I0929 10:36:57.314192  809094 system_pods.go:89] "amd-gpu-device-plugin-5pr2g" [d246c61e-5b06-4a8a-b834-a8104ed29092] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:36:57.314243  809094 system_pods.go:89] "coredns-66bc5c9577-5qvcx" [431a8d9d-251c-4916-9c89-87d909588d39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:36:57.314277  809094 system_pods.go:89] "csi-hostpath-attacher-0" [70a20e1e-6207-4b51-afa5-e013bc35656f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:36:57.314298  809094 system_pods.go:89] "csi-hostpath-resizer-0" [9de479e4-bad8-4f15-bfd7-f7f8a2dd12e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:36:57.314318  809094 system_pods.go:89] "csi-hostpathplugin-z7562" [bb4e1eda-850e-4da2-ae44-f84511d3ee67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:36:57.314350  809094 system_pods.go:89] "etcd-addons-529420" [1b81bfdd-919b-4cd8-a60e-22f79112660f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:36:57.314386  809094 system_pods.go:89] "kube-apiserver-addons-529420" [47505dbb-1863-4367-98f4-9f7ebb2c9415] Running
	I0929 10:36:57.314408  809094 system_pods.go:89] "kube-controller-manager-addons-529420" [37496a5f-d392-4a23-b1a5-1c7a43d653a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:36:57.314453  809094 system_pods.go:89] "kube-ingress-dns-minikube" [a3dd66da-3ebf-43f5-80b9-e99fc50dfcb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:36:57.314472  809094 system_pods.go:89] "kube-proxy-kxjqx" [99197e14-7d79-403b-87f4-b1f17f52f9f5] Running
	I0929 10:36:57.314489  809094 system_pods.go:89] "kube-scheduler-addons-529420" [ebe74bc8-ca56-4474-99cf-5b6633e8a102] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:36:57.314521  809094 system_pods.go:89] "metrics-server-85b7d694d7-6kvbb" [2261b64a-3e18-4fca-8fb4-4a870b5d302f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:36:57.314541  809094 system_pods.go:89] "nvidia-device-plugin-daemonset-rv5nz" [70340435-f458-4a2a-a617-185f04893887] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:36:57.314558  809094 system_pods.go:89] "registry-66898fdd98-9545s" [6b827e39-0a4c-4539-b408-9bfd1cdda6c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:36:57.314575  809094 system_pods.go:89] "registry-creds-764b6fb674-ssj2n" [a5aecf48-b091-49c9-8743-a9d358c2fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:36:57.314623  809094 system_pods.go:89] "registry-proxy-b2bpz" [87ba9cf0-8f09-4423-b674-f77637763bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:36:57.314641  809094 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxfh2" [bb19aa00-633c-4f38-88f2-3cd29546aec9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.314661  809094 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pwx9p" [5203fa8d-6a4b-4bc9-89be-9a58d780e11e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.314703  809094 system_pods.go:89] "storage-provisioner" [c791ec78-6e64-4b97-a928-e3f1e19e92c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:36:57.314741  809094 system_pods.go:126] duration metric: took 8.926144ms to wait for k8s-apps to be running ...
	I0929 10:36:57.314762  809094 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:36:57.314849  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:36:57.336098  809094 system_svc.go:56] duration metric: took 21.32375ms WaitForService to wait for kubelet
	I0929 10:36:57.336133  809094 kubeadm.go:578] duration metric: took 3.632804576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:36:57.336158  809094 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:36:57.344338  809094 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 10:36:57.344379  809094 node_conditions.go:123] node cpu capacity is 8
	I0929 10:36:57.344396  809094 node_conditions.go:105] duration metric: took 8.231792ms to run NodePressure ...
	I0929 10:36:57.344410  809094 start.go:241] waiting for startup goroutines ...
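(The NodePressure figures above, 304681132Ki ephemeral storage and 8 CPUs, come straight from the node's capacity block, and the kubelet check is a systemd query on the node; two hedged one-liners that read the same data:)

    kubectl --context addons-529420 get node addons-529420 -o jsonpath='{.status.capacity}{"\n"}'
    minikube -p addons-529420 ssh -- sudo systemctl is-active kubelet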
	I0929 10:36:57.376412  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:57.376539  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:57.692404  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.793184  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:57.793318  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.191147  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:58.233460  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.233602  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:58.692356  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:58.733694  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.733886  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:59.194666  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:59.293196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:59.293196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:59.692198  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:59.792340  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:59.792540  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:00.126856  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.750359151s)
	I0929 10:37:00.126994  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.750412046s)
	W0929 10:37:00.127048  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:00.127077  809094 retry.go:31] will retry after 243.584374ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1 (stdout/stderr identical to the warning above)
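(retry.go schedules re-applies with a growing, jittered delay — 152ms, 243ms, 313ms, 1.15s, 1.51s, 2.1s, 3.38s in this run. A rough shell equivalent of that loop, capped at a fixed attempt budget; a sketch of the observed behavior, not minikube's implementation:)

    attempts=0; delay=0.15
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.34.0/kubectl apply --force \
          -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml \
          || [ "$attempts" -ge 8 ]; do
      attempts=$((attempts + 1))
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN { print d * 1.6 }')  # roughly the growth seen above
    done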
	I0929 10:37:00.191955  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:00.292723  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:00.292816  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:00.371587  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:00.692541  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:00.733843  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:00.734315  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:01.147379  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:01.147434  809094 retry.go:31] will retry after 313.436894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1 (stdout/stderr identical to the warning above)
	I0929 10:37:01.192318  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:01.194800  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:37:01.194881  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:37:01.217609  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:37:01.269785  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:01.269897  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:01.337246  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 10:37:01.363074  809094 addons.go:238] Setting addon gcp-auth=true in "addons-529420"
	I0929 10:37:01.363143  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:37:01.363713  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:37:01.386898  809094 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:37:01.386956  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:37:01.409699  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
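(cli_runner resolves the container's published SSH port, 33519 here, with a Go template over docker inspect, then dials 127.0.0.1 as user "docker"; the same lookup and connection done by hand, key path taken from the log:)

    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-529420)
    ssh -p "$PORT" \
      -i /home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa \
      docker@127.0.0.1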
	I0929 10:37:01.461472  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:01.691916  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:01.733454  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:01.734086  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.191808  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:37:02.208498  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:02.208536  809094 retry.go:31] will retry after 1.150273014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1 (stdout/stderr identical to the warning above)
	I0929 10:37:02.211872  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:37:02.213574  809094 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:37:02.214876  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:37:02.214902  809094 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:37:02.234020  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:02.234162  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.241298  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:37:02.241331  809094 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:37:02.266924  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:37:02.266946  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 10:37:02.293874  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:37:02.684619  809094 addons.go:479] Verifying addon gcp-auth=true in "addons-529420"
	I0929 10:37:02.685986  809094 out.go:179] * Verifying gcp-auth addon...
	I0929 10:37:02.688185  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:37:02.693394  809094 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:37:02.693428  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:02.693405  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:02.733763  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.733885  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
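(The gcp-auth verification is the same label-selector wait as the other addons, just in the gcp-auth namespace; listing the pod and — assuming the addon registered its mutating webhook, which is how it injects credentials into workloads — the webhook configuration:)

    kubectl --context addons-529420 -n gcp-auth get pods \
      -l kubernetes.io/minikube-addons=gcp-auth
    kubectl --context addons-529420 get mutatingwebhookconfigurations | grep -i gcp-auth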
	I0929 10:37:03.192364  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:03.192855  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:03.234316  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.234465  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:03.359547  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:03.692021  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:03.692117  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:03.733209  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.734038  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:04.191279  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:04.191358  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:04.233692  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:04.234492  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:04.287957  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:04.287997  809094 retry.go:31] will retry after 1.511178059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1 (stdout/stderr identical to the warning above)
	I0929 10:37:04.692167  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:04.692234  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:04.733131  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:04.734062  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.191715  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:05.191921  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:05.233860  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:05.234129  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.691361  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:05.691521  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:05.734135  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:05.734258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.800129  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:06.191447  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:06.191552  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:06.233874  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:06.234076  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:06.411195  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:06.411250  809094 retry.go:31] will retry after 2.100064857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1 (stdout/stderr identical to the warning above)
	I0929 10:37:06.691798  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:06.692073  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:06.732827  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:06.733809  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.191508  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:07.191709  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:07.292363  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:07.292457  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.691546  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:07.691574  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:07.734030  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.734125  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.191781  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:08.191814  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:08.232967  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.233754  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:08.512112  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:08.691009  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:08.691510  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:08.733861  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.733938  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:09.104861  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:09.104897  809094 retry.go:31] will retry after 3.376157006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
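The stderr above is the root cause of every retry that follows: kubectl validates that each YAML document being applied carries apiVersion and kind, and the message indicates at least one document inside ig-crd.yaml omits both, while the stdout shows every other object in the two files applying cleanly. A minimal, hypothetical reproduction of the same validation failure (the file name and contents below are illustrative, not taken from the actual ig-crd.yaml):

    # A document with no apiVersion/kind header, as in the failing file
    cat <<'EOF' > /tmp/headerless.yaml
    metadata:
      name: example
    EOF
    kubectl apply --dry-run=client -f /tmp/headerless.yaml
    # expected: error validating data: [apiVersion not set, kind not set]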
	I0929 10:37:09.191367  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:09.191595  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:09.233494  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:09.233556  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:09.691727  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:09.691798  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:09.733640  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:09.733721  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:10.190922  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:10.191004  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:10.233082  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:10.234008  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:10.691979  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:10.692058  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:10.733220  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:10.733906  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:11.191409  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:11.191511  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:11.234068  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:11.234155  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:11.691573  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:11.691635  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:11.732866  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:11.733594  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:12.191979  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:12.192191  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:12.232888  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:12.233877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:12.481250  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:12.691033  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:12.691374  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:12.733300  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:12.733527  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:13.054889  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:13.054935  809094 retry.go:31] will retry after 5.523481945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:13.191285  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:13.191341  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:13.233717  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:13.233802  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:13.691877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:13.691923  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:13.732931  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:13.733745  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:14.191616  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:14.191752  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:14.233768  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:14.233879  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:14.691856  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:14.691909  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:14.733917  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:14.733984  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:15.191840  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:15.191956  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:15.232857  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:15.233963  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:15.691580  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:15.691731  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:15.733913  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:15.735663  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:16.191071  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:16.191303  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:16.233820  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:16.233918  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:16.691314  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:16.691386  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:16.734056  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:16.734170  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:17.191402  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:17.191593  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:17.233380  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:17.233625  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:17.691493  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:17.691699  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:17.732648  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:17.733372  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:18.191911  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:18.192046  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:18.233634  809094 kapi.go:107] duration metric: took 21.004035787s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 10:37:18.234214  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:18.578952  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:18.692732  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:18.693005  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:18.734571  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:19.192457  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:19.192698  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:19.236192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:19.359657  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:19.359698  809094 retry.go:31] will retry after 5.374303415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:19.691689  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:19.691870  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:19.734388  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:20.192362  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:20.192436  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:20.235035  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:20.691493  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:20.691559  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:20.734068  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:21.191686  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:21.191754  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:21.234043  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:21.691673  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:21.691816  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:21.734360  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:22.192209  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:22.192364  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:22.234765  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:22.691170  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:22.691296  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:22.734585  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:23.191838  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:23.191855  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:23.234192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:23.691617  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:23.691827  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:23.734342  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:24.191851  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:24.191914  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:24.234052  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:24.691665  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:24.691766  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:24.734744  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:24.778008  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:25.191224  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:25.191245  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:25.233932  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:25.323222  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:25.323263  809094 retry.go:31] will retry after 4.946418269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:25.692211  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:25.692261  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:25.793190  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:26.191796  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:26.192044  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:26.234744  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:26.691669  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:26.691730  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:26.792265  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:27.191948  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:27.192054  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:27.293582  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:27.691274  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:27.691324  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:27.734583  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:28.191896  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:28.192035  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:28.234579  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:28.691393  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:28.691468  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:28.745258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:29.191857  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:29.192145  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:29.293661  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:29.692369  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:29.692409  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:29.734847  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:30.191284  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:30.191549  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:30.235247  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:30.270015  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:30.692450  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:30.692468  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:30.734382  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:30.874687  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:30.874717  809094 retry.go:31] will retry after 9.611404012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:31.191034  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:31.191136  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:31.234596  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:31.691789  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:31.691851  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:31.734487  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:32.192228  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:32.192397  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:32.234895  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:32.691647  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:32.691826  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:32.734231  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:33.191791  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:33.191803  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:33.293681  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:33.691499  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:33.691575  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:33.733652  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:34.191347  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:34.191394  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:34.235018  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:34.692239  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:34.692288  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:34.734356  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:35.191945  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:35.192042  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:35.233802  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:35.691543  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:35.691708  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:35.733748  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:36.190728  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:36.190927  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:36.234247  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:36.691843  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:36.691958  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:36.734324  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:37.232504  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:37.232580  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:37.234832  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:37.691206  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:37.691326  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:37.734619  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:38.191595  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:38.191680  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:38.292586  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:38.692258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:38.692407  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:38.734713  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:39.191465  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:39.191541  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:39.233459  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:39.692264  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:39.692361  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:39.736727  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:40.191041  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:40.191241  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:40.282297  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:40.487332  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:40.700928  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:40.700982  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:40.734342  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:41.111058  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:41.111096  809094 retry.go:31] will retry after 27.798318096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:41.192192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:41.192448  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:41.234734  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:41.691467  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:41.691559  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:41.738052  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:42.191281  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:42.191334  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:42.234363  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:42.692206  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:42.692364  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:42.735143  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:43.192059  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:43.192131  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:43.234581  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:43.691539  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:43.691592  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:43.734168  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:44.192399  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:44.192554  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:44.235151  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:44.712201  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:44.712256  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:44.814275  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:45.191818  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:45.193367  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:45.292640  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:45.692318  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:45.692523  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:45.734670  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:46.191229  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:46.191310  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:46.234266  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:46.691380  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:46.691528  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:46.735585  809094 kapi.go:107] duration metric: took 49.505194902s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 10:37:47.191025  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:47.191255  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:47.691635  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:47.691700  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:48.191562  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:48.191703  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:48.692227  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:48.692295  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.191449  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:49.191458  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.691391  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.691392  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.191310  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.191376  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:50.691312  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.691530  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:51.190897  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:51.191071  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:51.690962  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:51.690975  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:52.191596  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:52.191799  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:52.691821  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:52.691860  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.192013  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:53.192203  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.690815  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.691412  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:54.191531  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:54.191563  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:54.690735  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:54.691192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.191955  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.192190  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:55.691501  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.691567  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:56.191135  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:56.191259  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:56.691523  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:56.691569  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.190892  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.191196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:57.691389  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.691460  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.191486  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.191637  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:58.691468  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.691615  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:59.191501  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:59.191524  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:59.691453  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:59.691624  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:00.191269  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:00.191495  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:00.691597  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:00.691645  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:01.192840  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:01.193251  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:01.692293  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:01.692359  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:02.191486  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:02.191506  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:02.691685  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:02.691815  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:03.191877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:03.191952  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:03.692663  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:03.692768  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:04.191650  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:04.191736  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:04.691141  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:04.691354  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:05.191572  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:05.191608  809094 kapi.go:107] duration metric: took 1m9.004009852s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:38:05.692502  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:06.192606  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:06.692131  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:07.191846  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:07.691923  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.192050  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.691556  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.910309  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:38:09.192274  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:38:09.505831  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:38:09.505869  809094 retry.go:31] will retry after 26.574045109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:38:09.692484  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:10.191984  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:10.692204  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:11.191688  809094 kapi.go:107] duration metric: took 1m8.503599106s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:38:11.193489  809094 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-529420 cluster.
	I0929 10:38:11.194659  809094 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:38:11.195892  809094 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
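	(Editor's note on the three gcp-auth messages above: opting a pod out of credential injection is done with a `gcp-auth-skip-secret` label on the pod itself. A minimal sketch of what that looks like — pod name and label value are assumptions for illustration, not taken from this log:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds              # hypothetical pod name
	      labels:
	        gcp-auth-skip-secret: "true"  # this label tells the gcp-auth webhook to skip this pod
	    spec:
	      containers:
	      - name: app
	        image: busybox
	        command: ["sleep", "3600"]
	)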
	I0929 10:38:36.081555  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 10:38:36.665106  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 10:38:36.665238  809094 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
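	(Editor's note: the retries above all fail for the same reason — kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest is missing the required top-level apiVersion and kind fields, while the other gadget resources apply cleanly. The file's actual contents are not shown in this log; purely as a sketch, a manifest passes that check only when both fields are present, e.g.:

	    # Hypothetical sketch — any Kubernetes manifest must declare these two fields
	    apiVersion: apiextensions.k8s.io/v1    # assumed group/version for a CRD
	    kind: CustomResourceDefinition
	    metadata:
	      name: traces.gadget.kinvolk.io       # placeholder name, not read from this log
	    spec: {}                               # trimmed; a real CRD also needs group/names/scope/versions
	)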
	I0929 10:38:36.667274  809094 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, registry-creds, metrics-server, ingress-dns, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0929 10:38:36.668228  809094 addons.go:514] duration metric: took 1m42.964873486s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass registry-creds metrics-server ingress-dns volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0929 10:38:36.668278  809094 start.go:246] waiting for cluster config update ...
	I0929 10:38:36.668307  809094 start.go:255] writing updated cluster config ...
	I0929 10:38:36.668641  809094 ssh_runner.go:195] Run: rm -f paused
	I0929 10:38:36.672803  809094 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:38:36.676997  809094 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5qvcx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.681719  809094 pod_ready.go:94] pod "coredns-66bc5c9577-5qvcx" is "Ready"
	I0929 10:38:36.681743  809094 pod_ready.go:86] duration metric: took 4.720124ms for pod "coredns-66bc5c9577-5qvcx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.684044  809094 pod_ready.go:83] waiting for pod "etcd-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.688395  809094 pod_ready.go:94] pod "etcd-addons-529420" is "Ready"
	I0929 10:38:36.688443  809094 pod_ready.go:86] duration metric: took 4.375753ms for pod "etcd-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.690812  809094 pod_ready.go:83] waiting for pod "kube-apiserver-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.694897  809094 pod_ready.go:94] pod "kube-apiserver-addons-529420" is "Ready"
	I0929 10:38:36.694938  809094 pod_ready.go:86] duration metric: took 4.098109ms for pod "kube-apiserver-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.696911  809094 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.077390  809094 pod_ready.go:94] pod "kube-controller-manager-addons-529420" is "Ready"
	I0929 10:38:37.077434  809094 pod_ready.go:86] duration metric: took 380.496118ms for pod "kube-controller-manager-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.276699  809094 pod_ready.go:83] waiting for pod "kube-proxy-kxjqx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.677151  809094 pod_ready.go:94] pod "kube-proxy-kxjqx" is "Ready"
	I0929 10:38:37.677180  809094 pod_ready.go:86] duration metric: took 400.453557ms for pod "kube-proxy-kxjqx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.877980  809094 pod_ready.go:83] waiting for pod "kube-scheduler-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:38.277511  809094 pod_ready.go:94] pod "kube-scheduler-addons-529420" is "Ready"
	I0929 10:38:38.277542  809094 pod_ready.go:86] duration metric: took 399.519909ms for pod "kube-scheduler-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:38.277554  809094 pod_ready.go:40] duration metric: took 1.604719848s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:38:38.326458  809094 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 10:38:38.327948  809094 out.go:179] * Done! kubectl is now configured to use "addons-529420" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 29 10:51:03 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:51:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"local-path-provisioner-648f6765c9-fg6hp_local-path-storage\": unexpected command output nsenter: cannot open /proc/3757/ns/net: No such file or directory\n with error: exit status 1"
	Sep 29 10:51:03 addons-529420 dockerd[1117]: time="2025-09-29T10:51:03.030646683Z" level=info msg="ignoring event" container=aae77681f23ba441420baf6d103140dd2f85604ccfb2122e8208c6413965dce0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:16 addons-529420 dockerd[1117]: time="2025-09-29T10:51:16.729246195Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:51:18 addons-529420 dockerd[1117]: time="2025-09-29T10:51:18.699355155Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:51:32 addons-529420 dockerd[1117]: time="2025-09-29T10:51:32.944720313Z" level=info msg="ignoring event" container=cb2fd8f0926448c938e07d8d939fb566c282a6042f8955201764da59148745a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:32 addons-529420 dockerd[1117]: time="2025-09-29T10:51:32.944783786Z" level=info msg="ignoring event" container=ac3df2986f815463c815a6cd85a541ae2e65e8a957b964e831d3d96f837d2ba3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:33 addons-529420 dockerd[1117]: time="2025-09-29T10:51:33.107691932Z" level=info msg="ignoring event" container=2f35c5222a67606f82feacd62dbc59700812e1433aa5668963a32c7d64cf5562 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:33 addons-529420 dockerd[1117]: time="2025-09-29T10:51:33.129379018Z" level=info msg="ignoring event" container=ba080eb06cf32e5567cb3de2cad4c94a4b5ba77704b386a5aaced5ac501c6bde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:33 addons-529420 dockerd[1117]: time="2025-09-29T10:51:33.830709426Z" level=info msg="ignoring event" container=7ed61e7756a579745f1d7412095ad44fe6390f668612433de6cd2b6a49c817e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:33 addons-529420 dockerd[1117]: time="2025-09-29T10:51:33.865101145Z" level=info msg="ignoring event" container=097ec25364497e74de8471f5318aa702f3c3929f897538ccabdc883dede4273f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:33 addons-529420 dockerd[1117]: time="2025-09-29T10:51:33.866471520Z" level=info msg="ignoring event" container=8ea1f8a967cfc36a6ede2031b1c66224c89bac9bf7705b95a3ab4069b9641616 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:33 addons-529420 dockerd[1117]: time="2025-09-29T10:51:33.884896439Z" level=info msg="ignoring event" container=4c9a460366a7b0fca774ada1a65f8e51f302a4a55df0cbaabd936fb850364a65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:33 addons-529420 dockerd[1117]: time="2025-09-29T10:51:33.885998455Z" level=info msg="ignoring event" container=42c80806f344231cb3585c6a25dca27b5455d3f56a4da966683a2d4eeae9be8b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:33 addons-529420 dockerd[1117]: time="2025-09-29T10:51:33.888644128Z" level=info msg="ignoring event" container=8c797f52e7973a4380f09e58fcee25cd97850d99b5e0afe9c56fbb291d44df96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:33 addons-529420 dockerd[1117]: time="2025-09-29T10:51:33.891127172Z" level=info msg="ignoring event" container=deb6678813fade2a30496f5b23739647356cb891a8df1829e38ae2e21ab3d0a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:33 addons-529420 dockerd[1117]: time="2025-09-29T10:51:33.892555212Z" level=info msg="ignoring event" container=b8bc330ded02ac3fedab88691975ec96da9466ba7dfae58977c5a6655bf886e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:34 addons-529420 dockerd[1117]: time="2025-09-29T10:51:34.019986199Z" level=info msg="ignoring event" container=80edbd2ea7f0fe6b7209e72a66696491181408a8bdbf0a63a861d9365d18dee7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:34 addons-529420 dockerd[1117]: time="2025-09-29T10:51:34.046970576Z" level=info msg="ignoring event" container=f5028bed08a36629a2828897a194a8335824bf895c65c504b1cf178121ef303d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:34 addons-529420 dockerd[1117]: time="2025-09-29T10:51:34.072249312Z" level=info msg="ignoring event" container=ebcc1d0ccdc31ccc344a51a6534bb8d18dcae14ba2cee899dd13fee5819cf191 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:38 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:51:38Z" level=error msg="error getting RW layer size for container ID 'cb2fd8f0926448c938e07d8d939fb566c282a6042f8955201764da59148745a1': Error response from daemon: No such container: cb2fd8f0926448c938e07d8d939fb566c282a6042f8955201764da59148745a1"
	Sep 29 10:51:38 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:51:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cb2fd8f0926448c938e07d8d939fb566c282a6042f8955201764da59148745a1'"
	Sep 29 10:51:38 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:51:38Z" level=error msg="error getting RW layer size for container ID 'ac3df2986f815463c815a6cd85a541ae2e65e8a957b964e831d3d96f837d2ba3': Error response from daemon: No such container: ac3df2986f815463c815a6cd85a541ae2e65e8a957b964e831d3d96f837d2ba3"
	Sep 29 10:51:38 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:51:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ac3df2986f815463c815a6cd85a541ae2e65e8a957b964e831d3d96f837d2ba3'"
	Sep 29 10:51:38 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:51:38Z" level=error msg="error getting RW layer size for container ID '7ed61e7756a579745f1d7412095ad44fe6390f668612433de6cd2b6a49c817e6': Error response from daemon: No such container: 7ed61e7756a579745f1d7412095ad44fe6390f668612433de6cd2b6a49c817e6"
	Sep 29 10:51:38 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:51:38Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7ed61e7756a579745f1d7412095ad44fe6390f668612433de6cd2b6a49c817e6'"
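	(Editor's note: the two "toomanyrequests" errors above are Docker Hub's unauthenticated pull rate limit, the likely root cause of the ImagePullBackOff failures in this run. A common mitigation is to authenticate pulls with an imagePullSecrets reference; a minimal sketch, where the secret name is an assumption and would be created beforehand with `kubectl create secret docker-registry`:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: ratelimited-pull        # hypothetical
	    spec:
	      imagePullSecrets:
	      - name: dockerhub-creds       # assumed secret holding Docker Hub credentials
	      containers:
	      - name: app
	        image: docker.io/library/nginx:stable
	)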
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c663ddd040c70       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   75cd2cc2ccb54       busybox
	53f450233c951       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             15 minutes ago      Running             controller                0                   7e7e5a36e33c6       ingress-nginx-controller-9cc49f96f-pxng2
	2575c5650e053       8c217da6734db                                                                                                                16 minutes ago      Exited              patch                     1                   8b7ddb7c14d4d       ingress-nginx-admission-patch-ncp6w
	788b28f41ae4a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   16 minutes ago      Exited              create                    0                   fe7563f8d917a       ingress-nginx-admission-create-tj9td
	866b95aa367a6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            16 minutes ago      Running             gadget                    0                   7ee25e2189efc       gadget-8bvtc
	7292e19e493f5       6e38f40d628db                                                                                                                16 minutes ago      Running             storage-provisioner       1                   1360822b0ef5c       storage-provisioner
	88b9438083846       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                         16 minutes ago      Running             minikube-ingress-dns      0                   33ea5a9962762       kube-ingress-dns-minikube
	910e07bc5455e       6e38f40d628db                                                                                                                16 minutes ago      Exited              storage-provisioner       0                   1360822b0ef5c       storage-provisioner
	c491d217e6a98       52546a367cc9e                                                                                                                16 minutes ago      Running             coredns                   0                   5f95340f57eb0       coredns-66bc5c9577-5qvcx
	7f229dd48a2c3       df0860106674d                                                                                                                16 minutes ago      Running             kube-proxy                0                   6014edb3b0f91       kube-proxy-kxjqx
	cc27cd95c2f3e       90550c43ad2bc                                                                                                                16 minutes ago      Running             kube-apiserver            0                   2e57097224cc0       kube-apiserver-addons-529420
	ea3b2c765f2ae       a0af72f2ec6d6                                                                                                                16 minutes ago      Running             kube-controller-manager   0                   ce79a900c656a       kube-controller-manager-addons-529420
	7f38046322a2d       46169d968e920                                                                                                                16 minutes ago      Running             kube-scheduler            0                   74b0641932447       kube-scheduler-addons-529420
	3a3152518f29b       5f1f5298c888d                                                                                                                16 minutes ago      Running             etcd                      0                   099566320ee0a       etcd-addons-529420
	
	
	==> controller_ingress [53f450233c95] <==
	I0929 10:38:05.882452       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0929 10:38:05.882700       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0929 10:38:05.883222       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 10:38:05.888729       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0929 10:38:05.888761       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-pxng2"
	I0929 10:38:05.892640       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-pxng2" node="addons-529420"
	I0929 10:38:05.926797       7 controller.go:228] "Backend successfully reloaded"
	I0929 10:38:05.926911       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0929 10:38:05.926976       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-pxng2", UID:"587b0efc-8a0d-48e9-8f76-f6c72ed2a5f2", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0929 10:38:05.993545       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-pxng2" node="addons-529420"
	W0929 10:45:37.492997       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 10:45:37.494693       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0929 10:45:37.498137       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0929 10:45:37.498313       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"1675f37f-1842-4388-a104-89352b87f57a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2589", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 10:45:40.520096       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 10:45:40.520746       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 10:45:40.560990       7 controller.go:228] "Backend successfully reloaded"
	I0929 10:45:40.561244       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-pxng2", UID:"587b0efc-8a0d-48e9-8f76-f6c72ed2a5f2", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 10:45:43.853384       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 10:46:05.896671       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I0929 10:46:05.900481       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"1675f37f-1842-4388-a104-89352b87f57a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2672", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 10:46:05.900661       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 10:47:24.477409       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 10:51:33.751444       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 10:51:37.085472       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
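	(Editor's note: the repeated "does not have any active Endpoint" warnings mean no Ready pod currently matches the selector of Service "default/nginx" — consistent with the image-pull failures elsewhere in this log. For reference only, a sketch of the shape involved; the selector label is an assumption, not read from this log:

	    apiVersion: v1
	    kind: Service
	    metadata:
	      name: nginx                   # the Service named in the warnings above
	      namespace: default
	    spec:
	      selector:
	        app: nginx                  # endpoints appear only once Ready pods carry this label
	      ports:
	      - port: 80
	        targetPort: 80
	)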
	
	
	==> coredns [c491d217e6a9] <==
	[INFO] 10.244.0.7:50415 - 40142 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000140179s
	[INFO] 10.244.0.7:51631 - 16547 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000077812s
	[INFO] 10.244.0.7:51631 - 16342 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000121622s
	[INFO] 10.244.0.7:54120 - 55745 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000074894s
	[INFO] 10.244.0.7:54120 - 55427 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00012144s
	[INFO] 10.244.0.7:44276 - 57260 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156752s
	[INFO] 10.244.0.7:44276 - 57057 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019348s
	[INFO] 10.244.0.25:60127 - 41853 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000357536s
	[INFO] 10.244.0.25:49489 - 60455 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000420887s
	[INFO] 10.244.0.25:56444 - 64863 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123075s
	[INFO] 10.244.0.25:47991 - 45219 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000219459s
	[INFO] 10.244.0.25:41073 - 37798 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132747s
	[INFO] 10.244.0.25:56023 - 28022 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00018111s
	[INFO] 10.244.0.25:34207 - 60772 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003985247s
	[INFO] 10.244.0.25:34528 - 32480 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00679976s
	[INFO] 10.244.0.25:57103 - 8380 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006696106s
	[INFO] 10.244.0.25:58770 - 56546 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.011895734s
	[INFO] 10.244.0.25:46155 - 33827 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004624849s
	[INFO] 10.244.0.25:33222 - 31647 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.008860505s
	[INFO] 10.244.0.25:50865 - 49473 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.0056946s
	[INFO] 10.244.0.25:47833 - 61181 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006586754s
	[INFO] 10.244.0.25:57597 - 20971 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000971648s
	[INFO] 10.244.0.25:51886 - 57600 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00104711s
	[INFO] 10.244.0.28:43176 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000337648s
	[INFO] 10.244.0.28:43537 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000209032s
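	(Editor's note: the long runs of NXDOMAIN answers above are normal resolver behavior, not failures — with Kubernetes' default ndots:5, a name like storage.googleapis.com is first tried against every entry in the pod's search path (cluster.local plus the GCE-internal domains seen above) before the bare name resolves with NOERROR. Pods that mostly resolve external names can reduce this chatter via dnsConfig; a minimal sketch with illustrative values:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: external-client         # hypothetical
	    spec:
	      dnsConfig:
	        options:
	        - name: ndots
	          value: "1"                # treat names containing a dot as absolute first
	      containers:
	      - name: app
	        image: busybox
	        command: ["sleep", "3600"]
	)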
	
	
	==> describe nodes <==
	Name:               addons-529420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-529420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=addons-529420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_36_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-529420
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:36:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-529420
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:53:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-529420
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a82221623174cb89aca2e3330c89f2e
	  System UUID:                a576baa7-93e3-43ba-ad25-9862293a3344
	  Boot ID:                    2fed06dd-7ca1-49fc-8cb4-296c71642eea
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  gadget                      gadget-8bvtc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-pxng2    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-5qvcx                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-addons-529420                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-529420                250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-529420       200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-kxjqx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-529420                100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node addons-529420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node addons-529420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node addons-529420 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node addons-529420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node addons-529420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node addons-529420 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node addons-529420 event: Registered Node addons-529420 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 60 55 0a b3 db 08 06
	[  +0.024494] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 ec 8f 40 19 66 08 06
	[  +2.802739] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 72 ba 15 15 c7 ec 08 06
	[  +0.014450] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9a 38 c6 be 73 e8 08 06
	[  +2.936645] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 df 88 08 78 89 08 06
	[  +0.326891] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 b3 ea 3d 15 d3 08 06
	[  +0.292071] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 79 3e 6e 1c 85 08 06
	[Sep29 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 fd 85 4d 34 7c 08 06
	[  +6.578912] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 8d 25 f9 d3 f7 08 06
	[  +0.000514] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[Sep29 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 20 b3 46 3c a7 08 06
	[  +0.000504] IPv4: martian source 10.244.0.28 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[  +0.000784] IPv4: martian source 10.244.0.28 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 7b 6f c2 51 08 06
	
	
	==> etcd [3a3152518f29] <==
	{"level":"warn","ts":"2025-09-29T10:36:45.911144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.918112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.975495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:57.777500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36938","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:37:03.895377Z","caller":"traceutil/trace.go:172","msg":"trace[858034751] transaction","detail":"{read_only:false; response_revision:1015; number_of_response:1; }","duration":"128.948339ms","start":"2025-09-29T10:37:03.766408Z","end":"2025-09-29T10:37:03.895357Z","steps":["trace[858034751] 'process raft request'  (duration: 128.809516ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:37:04.039951Z","caller":"traceutil/trace.go:172","msg":"trace[799810606] transaction","detail":"{read_only:false; response_revision:1016; number_of_response:1; }","duration":"139.742546ms","start":"2025-09-29T10:37:03.900189Z","end":"2025-09-29T10:37:04.039932Z","steps":["trace[799810606] 'process raft request'  (duration: 139.624536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:37:23.432233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.465158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.491685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.505493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.513523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.523591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.578347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.609684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.635915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.643629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.654366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.666394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55308","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:38:03.955142Z","caller":"traceutil/trace.go:172","msg":"trace[1495019545] transaction","detail":"{read_only:false; response_revision:1324; number_of_response:1; }","duration":"128.921298ms","start":"2025-09-29T10:38:03.826200Z","end":"2025-09-29T10:38:03.955121Z","steps":["trace[1495019545] 'process raft request'  (duration: 102.9549ms)","trace[1495019545] 'compare'  (duration: 25.853332ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T10:46:45.426042Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1847}
	{"level":"info","ts":"2025-09-29T10:46:45.453266Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1847,"took":"26.541323ms","hash":3270274898,"current-db-size-bytes":10637312,"current-db-size":"11 MB","current-db-size-in-use-bytes":6828032,"current-db-size-in-use":"6.8 MB"}
	{"level":"info","ts":"2025-09-29T10:46:45.453313Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3270274898,"revision":1847,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T10:51:45.431001Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2759}
	{"level":"info","ts":"2025-09-29T10:51:45.451515Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2759,"took":"19.777446ms","hash":2532647917,"current-db-size-bytes":10637312,"current-db-size":"11 MB","current-db-size-in-use-bytes":5406720,"current-db-size-in-use":"5.4 MB"}
	{"level":"info","ts":"2025-09-29T10:51:45.451577Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2532647917,"revision":2759,"compact-revision":1847}
	
	
	==> kernel <==
	 10:53:39 up  2:35,  0 users,  load average: 0.17, 0.33, 0.62
	Linux addons-529420 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [cc27cd95c2f3] <==
	I0929 10:46:46.423056       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 10:47:12.562468       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:47:52.427235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:48:29.056133       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:49:06.267590       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:49:32.974902       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:50:24.249376       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:50:54.444621       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:51:32.829674       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:51:32.829733       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:51:32.845524       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:51:32.845579       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:51:32.850197       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:51:32.850322       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:51:32.859243       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:51:32.859297       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:51:32.874502       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:51:32.874542       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 10:51:33.851324       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 10:51:33.875278       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0929 10:51:33.988275       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0929 10:51:34.950150       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:52:03.453998       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:52:42.699293       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:53:06.873889       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [ea3b2c765f2a] <==
	E0929 10:53:05.734115       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:05.735298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:08.405042       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:53:09.908236       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:09.909327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:14.124010       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:14.125155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:14.363205       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:14.364343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:17.241292       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:17.242446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:20.375400       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:20.376472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:21.425635       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:21.426744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:21.860079       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:21.861147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:22.698885       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:22.699877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:23.405697       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:53:24.628911       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:24.630086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:36.598607       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:36.599641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:38.406548       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
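	(Editor's note: the recurring persistentvolume-binder errors above mean PVC "default/test-pvc" requests storageClassName "local-path" while no such StorageClass exists at that moment — consistent with the storage-provisioner-rancher addon's resources being torn down during the test. For reference, a sketch of the StorageClass the claim expects; the provisioner string is the upstream local-path-provisioner's name, an assumption not read from this log:

	    apiVersion: storage.k8s.io/v1
	    kind: StorageClass
	    metadata:
	      name: local-path
	    provisioner: rancher.io/local-path   # assumed upstream provisioner name
	    volumeBindingMode: WaitForFirstConsumer
	    reclaimPolicy: Delete
	)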
	
	
	==> kube-proxy [7f229dd48a2c] <==
	I0929 10:36:55.962965       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:36:56.114754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:36:56.216582       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:56.217225       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:36:56.218058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:56.276653       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:36:56.277056       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:56.332280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:56.332751       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:56.332790       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:56.334977       1 config.go:200] "Starting service config controller"
	I0929 10:36:56.335003       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:56.335633       1 config.go:309] "Starting node config controller"
	I0929 10:36:56.335645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:56.335652       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:56.335877       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:56.335887       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:56.335924       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:56.335932       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:56.435153       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:56.436458       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:56.436501       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
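	(Editor's note: the only anomaly in this section is the "configuration may be incomplete" warning near the top — with nodePortAddresses unset, kube-proxy accepts NodePort traffic on all local IPs. The remedy the message itself suggests maps to this component-config field; a minimal sketch, assuming the v1alpha1 KubeProxyConfiguration schema:

	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    nodePortAddresses:
	    - "primary"         # restrict NodePort listeners to the node's primary IP(s)
	)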
	
	
	==> kube-scheduler [7f38046322a2] <==
	I0929 10:36:46.773756       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0929 10:36:46.774579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:36:46.774618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 10:36:46.774676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:36:46.775018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:36:46.775287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:36:46.775438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:36:46.775611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:36:46.775732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:36:46.775732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:36:46.775766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:36:46.776166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:36:46.776381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:36:46.776394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:36:46.776523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:36:46.776523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:36:46.776556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:36:46.776598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:36:46.776614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 10:36:46.776631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:36:47.607010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:36:47.607036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:36:47.611460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:36:47.673666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0929 10:36:47.973483       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:51:35 addons-529420 kubelet[2313]: I0929 10:51:35.713112    2313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:51:44 addons-529420 kubelet[2313]: E0929 10:51:44.715769    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:51:45 addons-529420 kubelet[2313]: I0929 10:51:45.712848    2313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-5qvcx" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:51:46 addons-529420 kubelet[2313]: E0929 10:51:46.713745    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:51:56 addons-529420 kubelet[2313]: E0929 10:51:56.715445    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:52:00 addons-529420 kubelet[2313]: E0929 10:52:00.712973    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:52:09 addons-529420 kubelet[2313]: E0929 10:52:09.714608    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:52:11 addons-529420 kubelet[2313]: W0929 10:52:11.830344    2313 logging.go:55] [core] [Channel #68 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Sep 29 10:52:13 addons-529420 kubelet[2313]: E0929 10:52:13.713687    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:52:21 addons-529420 kubelet[2313]: E0929 10:52:21.715058    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:52:25 addons-529420 kubelet[2313]: E0929 10:52:25.713490    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:52:32 addons-529420 kubelet[2313]: E0929 10:52:32.715804    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:52:38 addons-529420 kubelet[2313]: E0929 10:52:38.716787    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:52:45 addons-529420 kubelet[2313]: E0929 10:52:45.715360    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:52:50 addons-529420 kubelet[2313]: E0929 10:52:50.713574    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:52:53 addons-529420 kubelet[2313]: I0929 10:52:53.713458    2313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:52:53 addons-529420 kubelet[2313]: I0929 10:52:53.713635    2313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-5qvcx" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:52:58 addons-529420 kubelet[2313]: E0929 10:52:58.715999    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:53:03 addons-529420 kubelet[2313]: E0929 10:53:03.713799    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:53:13 addons-529420 kubelet[2313]: E0929 10:53:13.715596    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:53:15 addons-529420 kubelet[2313]: W0929 10:53:15.678063    2313 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Sep 29 10:53:16 addons-529420 kubelet[2313]: E0929 10:53:16.713750    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:53:24 addons-529420 kubelet[2313]: E0929 10:53:24.715021    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:53:30 addons-529420 kubelet[2313]: E0929 10:53:30.713071    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:53:35 addons-529420 kubelet[2313]: E0929 10:53:35.715050    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	
	
	==> storage-provisioner [7292e19e493f] <==
	W0929 10:53:15.068066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:17.071786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:17.077035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:19.080259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:19.085405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:21.089447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:21.093790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:23.096779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:23.102043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:25.105004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:25.109467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:27.112643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:27.116657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:29.119936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:29.124119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:31.127610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:31.132219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:33.135066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:33.139722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:35.142845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:35.147893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:37.150997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:37.155061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:39.158132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:53:39.162669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [910e07bc5455] <==
	I0929 10:36:56.614357       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 10:37:26.626869       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
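Every ImagePullBackOff in the kubelet log above traces back to the same cause, Docker Hub's unauthenticated pull rate limit ("toomanyrequests"), rather than a cluster fault. A minimal Go sketch for checking the remaining anonymous quota from the CI host, following Docker Hub's documented token-then-manifest-HEAD flow (ratelimitpreview/test is Docker's public rate-limit test repository; assumes outbound access to auth.docker.io and registry-1.docker.io):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Fetch an anonymous pull token for Docker's public rate-limit test repo.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// A HEAD request against a manifest returns the counters without consuming a pull.
	req, err := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}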
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-529420 -n addons-529420
helpers_test.go:269: (dbg) Run:  kubectl --context addons-529420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w: exit status 1 (86.202218ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-529420/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:45:37 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nbt6z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nbt6z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-529420
	  Normal   Pulling    5m8s (x5 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m7s (x5 over 8m1s)    kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m7s (x5 over 8m1s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    2m48s (x21 over 8m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m48s (x21 over 8m1s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-529420/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:45:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxqpd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-xxqpd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m11s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-529420
	  Warning  Failed     8m9s                   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m5s (x5 over 8m10s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     5m4s (x5 over 8m9s)    kubelet            Error: ErrImagePull
	  Warning  Failed     5m4s (x4 over 7m53s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m3s (x20 over 8m8s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m51s (x21 over 8m8s)  kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-29hwq (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-29hwq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tj9td" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ncp6w" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-529420 addons disable ingress-dns --alsologtostderr -v=1: (1.338172484s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-529420 addons disable ingress --alsologtostderr -v=1: (7.641651896s)
--- FAIL: TestAddons/parallel/Ingress (491.84s)

TestAddons/parallel/CSI (389.19s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0929 10:45:10.555906  807725 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.345953ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-529420 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/09/29 10:45:24 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc hpvc -o jsonpath={.status.phase} -n default
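The repeated kubectl invocations above are the harness polling the claim's phase until it reports "Bound". A self-contained Go sketch of equivalent polling, shelling out to kubectl the same way the helpers do (waitPVCBound and the 2-second re-check period are illustrative, not the test's actual code; the 6-minute timeout matches the wait advertised above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls a PVC's phase via kubectl until it is Bound or the timeout elapses.
func waitPVCBound(kubeContext, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // re-check period between polls
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-529420", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}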
addons_test.go:562: (dbg) Run:  kubectl --context addons-529420 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [f53396f1-c1a5-46b0-8089-67b5c438ba7b] Pending
helpers_test.go:352: "task-pv-pod" [f53396f1-c1a5-46b0-8089-67b5c438ba7b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-529420 -n addons-529420
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-09-29 10:51:30.212609778 +0000 UTC m=+944.048528597
addons_test.go:567: (dbg) Run:  kubectl --context addons-529420 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-529420 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-529420/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:45:29 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
IP:  10.244.0.29
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP (http-server)
Host Port:      0/TCP (http-server)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxqpd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-xxqpd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m1s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-529420
Warning  Failed     5m59s                  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m55s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     2m54s (x5 over 5m59s)  kubelet            Error: ErrImagePull
Warning  Failed     2m54s (x4 over 5m43s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     53s (x20 over 5m58s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    41s (x21 over 5m58s)   kubelet            Back-off pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-529420 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-529420 logs task-pv-pod -n default: exit status 1 (76.483592ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:567: kubectl --context addons-529420 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
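One way to decouple these tests from Docker Hub's anonymous quota is to pull the images once on an authenticated host and copy them into the node, so the kubelet never pulls from the registry at all. A sketch using the standard docker and minikube CLIs (run docker login on the host first; the profile name matches this run, and the image list here is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a CLI command, streaming its output to the caller's terminal.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, img := range []string{"docker.io/nginx", "docker.io/nginx:alpine"} {
		// Host-side pull; counts against the authenticated (higher) quota.
		if err := run("docker", "pull", img); err != nil {
			fmt.Println("pull failed:", err)
			continue
		}
		// Copy the cached image into the minikube node's container runtime.
		if err := run("minikube", "-p", "addons-529420", "image", "load", img); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}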
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-529420
helpers_test.go:243: (dbg) docker inspect addons-529420:

-- stdout --
	[
	    {
	        "Id": "c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70",
	        "Created": "2025-09-29T10:36:32.534326886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 809726,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:36:32.572013527Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/hostname",
	        "HostsPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/hosts",
	        "LogPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70-json.log",
	        "Name": "/addons-529420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-529420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-529420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70",
	                "LowerDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21-init/diff:/var/lib/docker/overlay2/8785ead96546661033f04e32798b1ac8b184b1c04fc8862515b87d56909b6b9d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-529420",
	                "Source": "/var/lib/docker/volumes/addons-529420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-529420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-529420",
	                "name.minikube.sigs.k8s.io": "addons-529420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "198f81b793e83c57ca2854cf8d1d0464141313e0495f23d0367891bbb8b05aa2",
	            "SandboxKey": "/var/run/docker/netns/198f81b793e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-529420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:86:fe:b5:4c:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3724749a501df880fce098abbef40c1fc5e7d4b0ca633d7b7b33237babda66c6",
	                    "EndpointID": "381be5dcde6efd6127536cd89de924bde7f3b532aa3b84038bdaf13a91eea4d0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-529420",
	                        "c119ca8ea690"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
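The NetworkSettings.Ports block in the inspect output above records how minikube publishes the node's container ports on loopback (the API server's 8443 lands on 127.0.0.1:33522 in this run). The same mapping can be read back with the docker CLI; a small sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask docker which host address/port is bound to container port 8443.
	out, err := exec.Command("docker", "port", "addons-529420", "8443").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // e.g. 127.0.0.1:33522
}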
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-529420 -n addons-529420
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-529420 logs -n 25: (1.064070986s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-239254                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-239254   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ delete  │ -p download-only-079557                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-079557   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ delete  │ -p download-only-239254                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-239254   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ start   │ --download-only -p download-docker-537454 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-537454 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ delete  │ -p download-docker-537454                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-537454 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ start   │ --download-only -p binary-mirror-606832 --alsologtostderr --binary-mirror http://127.0.0.1:41679 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-606832   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ delete  │ -p binary-mirror-606832                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-606832   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ addons  │ enable dashboard -p addons-529420                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ addons  │ disable dashboard -p addons-529420                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ start   │ -p addons-529420 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:38 UTC │
	│ addons  │ addons-529420 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:44 UTC │ 29 Sep 25 10:44 UTC │
	│ addons  │ addons-529420 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ enable headlamp -p addons-529420 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ ip      │ addons-529420 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	│ addons  │ addons-529420 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-529420                                                                                                                                                                                                                                                                                                                                                                                             │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	│ addons  │ addons-529420 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	│ addons  │ addons-529420 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                            │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:50 UTC │ 29 Sep 25 10:51 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:36:12
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:36:12.344032  809094 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:36:12.344307  809094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:36:12.344317  809094 out.go:374] Setting ErrFile to fd 2...
	I0929 10:36:12.344322  809094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:36:12.344556  809094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 10:36:12.345145  809094 out.go:368] Setting JSON to false
	I0929 10:36:12.346064  809094 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8313,"bootTime":1759133859,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:36:12.346178  809094 start.go:140] virtualization: kvm guest
	I0929 10:36:12.347964  809094 out.go:179] * [addons-529420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:36:12.349153  809094 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:36:12.349158  809094 notify.go:220] Checking for updates...
	I0929 10:36:12.351412  809094 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:36:12.352567  809094 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:36:12.353660  809094 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:36:12.354776  809094 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:36:12.356142  809094 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:36:12.357739  809094 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:36:12.381211  809094 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:36:12.381322  809094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:36:12.437520  809094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:36:12.425260973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:36:12.437676  809094 docker.go:318] overlay module found
	I0929 10:36:12.439238  809094 out.go:179] * Using the docker driver based on user configuration
	I0929 10:36:12.440251  809094 start.go:304] selected driver: docker
	I0929 10:36:12.440267  809094 start.go:924] validating driver "docker" against <nil>
	I0929 10:36:12.440280  809094 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:36:12.440926  809094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:36:12.500513  809094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:36:12.490511662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:36:12.500726  809094 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:36:12.500930  809094 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:36:12.502519  809094 out.go:179] * Using Docker driver with root privileges
	I0929 10:36:12.503551  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:12.503623  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:12.503635  809094 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:36:12.503733  809094 start.go:348] cluster config:
	{Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:36:12.504920  809094 out.go:179] * Starting "addons-529420" primary control-plane node in "addons-529420" cluster
	I0929 10:36:12.505901  809094 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 10:36:12.506980  809094 out.go:179] * Pulling base image v0.0.48 ...
	I0929 10:36:12.508072  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:12.508109  809094 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:36:12.508127  809094 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 10:36:12.508138  809094 cache.go:58] Caching tarball of preloaded images
	I0929 10:36:12.508233  809094 preload.go:172] Found /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 10:36:12.508247  809094 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 10:36:12.508686  809094 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json ...
	I0929 10:36:12.508722  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json: {Name:mkf2bdb1c1bab8c96c543722d15e10cff077f479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:12.525703  809094 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:36:12.525838  809094 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:36:12.525859  809094 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:36:12.525863  809094 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:36:12.525870  809094 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:36:12.525876  809094 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 10:36:25.658111  809094 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 10:36:25.658161  809094 cache.go:232] Successfully downloaded all kic artifacts
	I0929 10:36:25.658213  809094 start.go:360] acquireMachinesLock for addons-529420: {Name:mkb1d52da565bdfba9b5d34f2b7f2b7e0a10b5f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:36:25.658335  809094 start.go:364] duration metric: took 99.814µs to acquireMachinesLock for "addons-529420"
	I0929 10:36:25.658362  809094 start.go:93] Provisioning new machine with config: &{Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 10:36:25.658452  809094 start.go:125] createHost starting for "" (driver="docker")
	I0929 10:36:25.660487  809094 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 10:36:25.660770  809094 start.go:159] libmachine.API.Create for "addons-529420" (driver="docker")
	I0929 10:36:25.660815  809094 client.go:168] LocalClient.Create starting
	I0929 10:36:25.660956  809094 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem
	I0929 10:36:25.875289  809094 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem
	I0929 10:36:26.406689  809094 cli_runner.go:164] Run: docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 10:36:26.424628  809094 cli_runner.go:211] docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 10:36:26.424709  809094 network_create.go:284] running [docker network inspect addons-529420] to gather additional debugging logs...
	I0929 10:36:26.424730  809094 cli_runner.go:164] Run: docker network inspect addons-529420
	W0929 10:36:26.443146  809094 cli_runner.go:211] docker network inspect addons-529420 returned with exit code 1
	I0929 10:36:26.443184  809094 network_create.go:287] error running [docker network inspect addons-529420]: docker network inspect addons-529420: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-529420 not found
	I0929 10:36:26.443196  809094 network_create.go:289] output of [docker network inspect addons-529420]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-529420 not found
	
	** /stderr **
	I0929 10:36:26.443292  809094 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:36:26.462904  809094 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bea4c0}
	I0929 10:36:26.462945  809094 network_create.go:124] attempt to create docker network addons-529420 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 10:36:26.462999  809094 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-529420 addons-529420
	I0929 10:36:26.525434  809094 network_create.go:108] docker network addons-529420 192.168.49.0/24 created
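The lines above show minikube probing for the first free private /24 (settling on 192.168.49.0/24) and creating a dedicated bridge network so the node container can be given a predictable static IP. As a quick sketch, the result can be checked with plain docker commands (the network name addons-529420 is the one from this run):

    # Verify the driver and subnet of the network minikube just created
    docker network inspect addons-529420 \
      --format '{{.Driver}} {{(index .IPAM.Config 0).Subnet}}'
    # For this run the output would be: bridge 192.168.49.0/24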
	I0929 10:36:26.525484  809094 kic.go:121] calculated static IP "192.168.49.2" for the "addons-529420" container
	I0929 10:36:26.525595  809094 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 10:36:26.543028  809094 cli_runner.go:164] Run: docker volume create addons-529420 --label name.minikube.sigs.k8s.io=addons-529420 --label created_by.minikube.sigs.k8s.io=true
	I0929 10:36:26.562375  809094 oci.go:103] Successfully created a docker volume addons-529420
	I0929 10:36:26.562495  809094 cli_runner.go:164] Run: docker run --rm --name addons-529420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --entrypoint /usr/bin/test -v addons-529420:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 10:36:28.727983  809094 cli_runner.go:217] Completed: docker run --rm --name addons-529420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --entrypoint /usr/bin/test -v addons-529420:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (2.165422344s)
	I0929 10:36:28.728035  809094 oci.go:107] Successfully prepared a docker volume addons-529420
	I0929 10:36:28.728077  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:28.728109  809094 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 10:36:28.728393  809094 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-529420:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 10:36:32.463008  809094 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-529420:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.734545448s)
	I0929 10:36:32.463043  809094 kic.go:203] duration metric: took 3.734930316s to extract preloaded images to volume ...
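This step is a sidecar-extraction pattern: a throwaway container bind-mounts the preload tarball read-only, mounts the named volume at /extractDir, and lets tar unpack the cached images straight into the volume that later backs /var in the node container. A minimal sketch of the same idea, with illustrative image, volume, and tarball names rather than the ones from this run:

    # Unpack a host tarball into a named docker volume via a disposable container
    docker volume create demo-vol
    docker run --rm \
      -v "$PWD/images.tar.gz:/preloaded.tar:ro" \
      -v demo-vol:/extractDir \
      --entrypoint /bin/tar \
      ubuntu:22.04 -xzf /preloaded.tar -C /extractDir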
	W0929 10:36:32.463138  809094 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 10:36:32.463171  809094 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 10:36:32.463213  809094 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 10:36:32.517633  809094 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-529420 --name addons-529420 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-529420 --network addons-529420 --ip 192.168.49.2 --volume addons-529420:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 10:36:32.782605  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Running}}
	I0929 10:36:32.802682  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:32.821845  809094 cli_runner.go:164] Run: docker exec addons-529420 stat /var/lib/dpkg/alternatives/iptables
	I0929 10:36:32.871577  809094 oci.go:144] the created container "addons-529420" has a running status.
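Note the --publish=127.0.0.1::8443 (and ::22, ::2376, and so on) form in the docker run above: an empty host-port field tells Docker to bind a random loopback port for each published container port, which is why the SSH sessions below go to 127.0.0.1:33519 rather than to port 22. The chosen port can always be recovered from the container metadata:

    # Look up the random host port Docker bound for the container's SSH endpoint
    docker container inspect addons-529420 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'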
	I0929 10:36:32.871612  809094 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa...
	I0929 10:36:33.119788  809094 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 10:36:33.151014  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:33.170919  809094 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 10:36:33.170942  809094 kic_runner.go:114] Args: [docker exec --privileged addons-529420 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 10:36:33.215532  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:33.236236  809094 machine.go:93] provisionDockerMachine start ...
	I0929 10:36:33.236361  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.258174  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.258466  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.258486  809094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 10:36:33.399100  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-529420
	
	I0929 10:36:33.399136  809094 ubuntu.go:182] provisioning hostname "addons-529420"
	I0929 10:36:33.399203  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.418829  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.419101  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.419124  809094 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-529420 && echo "addons-529420" | sudo tee /etc/hostname
	I0929 10:36:33.570695  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-529420
	
	I0929 10:36:33.570813  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.590267  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.590566  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.590590  809094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-529420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-529420/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-529420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:36:33.728831  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:36:33.728862  809094 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21656-804126/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-804126/.minikube}
	I0929 10:36:33.728905  809094 ubuntu.go:190] setting up certificates
	I0929 10:36:33.728918  809094 provision.go:84] configureAuth start
	I0929 10:36:33.728972  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:33.748261  809094 provision.go:143] copyHostCerts
	I0929 10:36:33.748352  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/ca.pem (1082 bytes)
	I0929 10:36:33.748507  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/cert.pem (1123 bytes)
	I0929 10:36:33.748577  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/key.pem (1679 bytes)
	I0929 10:36:33.748631  809094 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem org=jenkins.addons-529420 san=[127.0.0.1 192.168.49.2 addons-529420 localhost minikube]
	I0929 10:36:33.869211  809094 provision.go:177] copyRemoteCerts
	I0929 10:36:33.869279  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:36:33.869324  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.888791  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:33.988019  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:36:34.017500  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:36:34.045429  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 10:36:34.073342  809094 provision.go:87] duration metric: took 344.406928ms to configureAuth
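configureAuth generates a Docker TLS server certificate whose SAN list covers every name the daemon may be addressed by: loopback, the static node IP, the hostname, and the generic minikube names. minikube does this in Go; purely as an illustration (not minikube's actual implementation), the equivalent with openssl against the CA files named in this log would look roughly like:

    # Illustrative openssl equivalent of the generated server certificate
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.addons-529420"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-529420,DNS:localhost,DNS:minikube")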
	I0929 10:36:34.073378  809094 ubuntu.go:206] setting minikube options for container-runtime
	I0929 10:36:34.073606  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:34.073667  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.093622  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.093924  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.093946  809094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 10:36:34.233714  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 10:36:34.233744  809094 ubuntu.go:71] root file system type: overlay
	I0929 10:36:34.233863  809094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 10:36:34.233924  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.253864  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.254082  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.254160  809094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 10:36:34.407267  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
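The unit above leans on a standard systemd idiom that its own comments call out: an empty ExecStart= first clears the command inherited from the base unit, and the following ExecStart= installs the replacement. Without the reset, a non-oneshot service with two ExecStart lines is rejected by systemd. The same idiom in a minimal stand-alone drop-in (illustrative, not taken from this run):

    # /etc/systemd/system/docker.service.d/override.conf (illustrative)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock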
	
	I0929 10:36:34.407373  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.426330  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.426664  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.426694  809094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 10:36:35.600389  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 10:36:34.405290761 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0929 10:36:35.600442  809094 machine.go:96] duration metric: took 2.364149235s to provisionDockerMachine
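The unit swap at 10:36:34 above is deliberately idempotent: diff -u exits 0 when the rendered unit matches what is installed, so the || { mv; daemon-reload; enable; restart; } branch fires only when the file actually changed, as it did here on first boot. Stripped to its essentials, the guard looks like this (file and service names are illustrative):

    # Replace a config file and restart its service only when the content changed
    if ! sudo diff -u /etc/myapp.conf /tmp/myapp.conf.new >/dev/null; then
      sudo mv /tmp/myapp.conf.new /etc/myapp.conf
      sudo systemctl daemon-reload
      sudo systemctl restart myapp
    fi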
	I0929 10:36:35.600461  809094 client.go:171] duration metric: took 9.939636569s to LocalClient.Create
	I0929 10:36:35.600487  809094 start.go:167] duration metric: took 9.939718481s to libmachine.API.Create "addons-529420"
	I0929 10:36:35.600498  809094 start.go:293] postStartSetup for "addons-529420" (driver="docker")
	I0929 10:36:35.600507  809094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:36:35.600574  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:36:35.600613  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.620381  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.722159  809094 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:36:35.726032  809094 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 10:36:35.726071  809094 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 10:36:35.726079  809094 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 10:36:35.726086  809094 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 10:36:35.726099  809094 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-804126/.minikube/addons for local assets ...
	I0929 10:36:35.726162  809094 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-804126/.minikube/files for local assets ...
	I0929 10:36:35.726189  809094 start.go:296] duration metric: took 125.685891ms for postStartSetup
	I0929 10:36:35.726542  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:35.746842  809094 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json ...
	I0929 10:36:35.747139  809094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:36:35.747190  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.766493  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.861001  809094 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 10:36:35.865987  809094 start.go:128] duration metric: took 10.207507252s to createHost
	I0929 10:36:35.866014  809094 start.go:83] releasing machines lock for "addons-529420", held for 10.207665879s
	I0929 10:36:35.866077  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:35.884982  809094 ssh_runner.go:195] Run: cat /version.json
	I0929 10:36:35.885044  809094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:36:35.885050  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.885102  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.904087  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.905861  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:36.079122  809094 ssh_runner.go:195] Run: systemctl --version
	I0929 10:36:36.084194  809094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 10:36:36.089268  809094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 10:36:36.123533  809094 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 10:36:36.123613  809094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:36:36.154267  809094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
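The two find commands above normalize whatever CNI configs ship in the base image: the first patches any loopback config to carry a "name" field and cniVersion 1.0.0, the second renames competing bridge/podman configs to *.mk_disabled so that the bridge CNI minikube installs is the only one left active. After the patch, the loopback config is effectively the following (a sketch of the resulting file, which the log does not print):

    {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
    }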
	I0929 10:36:36.154304  809094 start.go:495] detecting cgroup driver to use...
	I0929 10:36:36.154338  809094 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:36:36.154522  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:36:36.173730  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 10:36:36.186991  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 10:36:36.198504  809094 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 10:36:36.198583  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 10:36:36.210007  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 10:36:36.221306  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 10:36:36.232714  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 10:36:36.244397  809094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:36:36.255307  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 10:36:36.266876  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 10:36:36.278201  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
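The sed pipeline above rewrites /etc/containerd/config.toml in place: it pins the pause sandbox image, moves the runc runtime onto the v2 shim, turns on SystemdCgroup to match the "systemd" cgroup driver detected on the host, and re-enables unprivileged ports. The stanza those edits converge on looks roughly like this (a reconstructed fragment, not the full file):

    # /etc/containerd/config.toml (relevant fragment, reconstructed)
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      enable_unprivileged_ports = true
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true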
	I0929 10:36:36.289714  809094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:36:36.299414  809094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 10:36:36.309736  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:36.379690  809094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 10:36:36.458212  809094 start.go:495] detecting cgroup driver to use...
	I0929 10:36:36.458261  809094 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:36:36.458313  809094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 10:36:36.471829  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:36:36.484463  809094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:36:36.501567  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:36:36.514414  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 10:36:36.527777  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:36:36.546772  809094 ssh_runner.go:195] Run: which cri-dockerd
	I0929 10:36:36.550960  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 10:36:36.562724  809094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 10:36:36.582736  809094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 10:36:36.653862  809094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 10:36:36.724043  809094 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 10:36:36.724204  809094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
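The 129-byte daemon.json itself is not echoed to the log. A minimal file that selects the systemd cgroup driver the same way would look roughly like this (a sketch, not the exact bytes minikube writes):

    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" }
    }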
	I0929 10:36:36.744794  809094 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 10:36:36.757465  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:36.826884  809094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 10:36:37.610007  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:36:37.622898  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 10:36:37.636301  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 10:36:37.649738  809094 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 10:36:37.722666  809094 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 10:36:37.796505  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:37.867566  809094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 10:36:37.892266  809094 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 10:36:37.904989  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:37.972029  809094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 10:36:38.051455  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 10:36:38.065182  809094 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 10:36:38.065259  809094 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 10:36:38.069600  809094 start.go:563] Will wait 60s for crictl version
	I0929 10:36:38.069675  809094 ssh_runner.go:195] Run: which crictl
	I0929 10:36:38.073569  809094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:36:38.110640  809094 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 10:36:38.110713  809094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 10:36:38.140525  809094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 10:36:38.168486  809094 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 10:36:38.168581  809094 cli_runner.go:164] Run: docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:36:38.186777  809094 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 10:36:38.191260  809094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:36:38.203969  809094 kubeadm.go:875] updating cluster {Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:36:38.204104  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:38.204155  809094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 10:36:38.227008  809094 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 10:36:38.227036  809094 docker.go:621] Images already preloaded, skipping extraction
	I0929 10:36:38.227106  809094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 10:36:38.249601  809094 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 10:36:38.249632  809094 cache_images.go:85] Images are preloaded, skipping loading
	I0929 10:36:38.249646  809094 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0929 10:36:38.249774  809094 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-529420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 10:36:38.249838  809094 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 10:36:38.304936  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:38.304974  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:38.305005  809094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:36:38.305052  809094 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-529420 NodeName:addons-529420 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:36:38.305228  809094 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-529420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 10:36:38.305304  809094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:36:38.315730  809094 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:36:38.315813  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:36:38.326318  809094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 10:36:38.346540  809094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:36:38.366500  809094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
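The kubeadm config just copied to the node can be sanity-checked before the real init using kubeadm's dry-run mode, e.g.:

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run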
	I0929 10:36:38.386461  809094 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 10:36:38.390498  809094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:36:38.403052  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:38.473834  809094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:36:38.499163  809094 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420 for IP: 192.168.49.2
	I0929 10:36:38.499185  809094 certs.go:194] generating shared ca certs ...
	I0929 10:36:38.499202  809094 certs.go:226] acquiring lock for ca certs: {Name:mkc1a6b9be22ca2c6acbf4ae075c8649cd7bb029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.499334  809094 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key
	I0929 10:36:38.646613  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt ...
	I0929 10:36:38.646650  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt: {Name:mk6af424fad818aae2ecb63097cb3cf4fa072fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.646838  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key ...
	I0929 10:36:38.646852  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key: {Name:mk82f55830669e727af6d6e5b82afa28223d781a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.646926  809094 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key
	I0929 10:36:38.721965  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt ...
	I0929 10:36:38.721996  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt: {Name:mkfb52e92ed776fc971f1dc2b44df4c3ddc05e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.722184  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key ...
	I0929 10:36:38.722197  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key: {Name:mk1e237f25fee0cc4ff615a96b95c3d6470c2cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.722265  809094 certs.go:256] generating profile certs ...
	I0929 10:36:38.722327  809094 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key
	I0929 10:36:38.722341  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt with IP's: []
	I0929 10:36:39.380758  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt ...
	I0929 10:36:39.380790  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: {Name:mkc77ef14ddf2986f76245d42c12726cef377eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.380976  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key ...
	I0929 10:36:39.380988  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key: {Name:mke6b630fea371c2903c210193a2d24b7bb1f06f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.381077  809094 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27
	I0929 10:36:39.381099  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 10:36:39.520731  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 ...
	I0929 10:36:39.520767  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27: {Name:mkcca52b9df73c7117f5aaacf4f623dd6c9d43c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.520941  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27 ...
	I0929 10:36:39.520956  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27: {Name:mk26b5e71bc5d928280847a07112275ef7de7abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.521026  809094 certs.go:381] copying /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 -> /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt
	I0929 10:36:39.521153  809094 certs.go:385] copying /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27 -> /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key
	I0929 10:36:39.521216  809094 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key
	I0929 10:36:39.521237  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt with IP's: []
	I0929 10:36:39.588037  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt ...
	I0929 10:36:39.588071  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt: {Name:mkf924850f5da9a86daf75692dc66662efd3fc9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.588249  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key ...
	I0929 10:36:39.588264  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key: {Name:mkecb516723c573c2dd57316eed6cd1d6263ba08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.588475  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:36:39.588513  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:36:39.588537  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:36:39.588557  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem (1679 bytes)
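minikube generates these CAs in-process (crypto.go), but the artifacts are ordinary self-signed CAs. For comparison, a roughly equivalent openssl recipe (key size and validity here are illustrative, not minikube's exact values):

    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -nodes -key ca.key -subj "/CN=minikubeCA" -days 1095 -out ca.crt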
	I0929 10:36:39.589229  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:36:39.617084  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 10:36:39.644094  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:36:39.671391  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 10:36:39.698770  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:36:39.726395  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 10:36:39.753719  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:36:39.781288  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 10:36:39.808573  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:36:39.838955  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
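The 738-byte /var/lib/minikube/kubeconfig has the standard kubeconfig shape; schematically (a sketch with values abridged; the embedded client credentials are not shown in the log):

    apiVersion: v1
    kind: Config
    clusters:
    - name: mk
      cluster:
        certificate-authority: /var/lib/minikube/certs/ca.crt
        server: https://control-plane.minikube.internal:8443
    contexts:
    - name: mk
      context: {cluster: mk, user: mk}
    current-context: mk
    users:
    - name: mk
      user: {}        # client credential fields abridged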
	I0929 10:36:39.859027  809094 ssh_runner.go:195] Run: openssl version
	I0929 10:36:39.865345  809094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:36:39.879241  809094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.883476  809094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.883550  809094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.891125  809094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
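The b5213941.0 link name is the OpenSSL subject hash of the CA certificate, which is exactly what the x509 -hash call above computes; the two commands amount to:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"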
	I0929 10:36:39.902246  809094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:36:39.906800  809094 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:36:39.906853  809094 kubeadm.go:392] StartCluster: {Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:36:39.907037  809094 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 10:36:39.927225  809094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:36:39.937579  809094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:36:39.948182  809094 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 10:36:39.948260  809094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:36:39.958474  809094 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:36:39.958499  809094 kubeadm.go:157] found existing configuration files:
	
	I0929 10:36:39.958547  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:36:39.968562  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:36:39.968615  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:36:39.978358  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:36:39.988499  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:36:39.988557  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:36:39.998306  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:36:40.008316  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:36:40.008394  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:36:40.018692  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:36:40.028624  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:36:40.028694  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:36:40.038485  809094 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 10:36:40.113436  809094 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 10:36:40.172082  809094 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:36:49.487259  809094 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:36:49.487349  809094 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:36:49.487474  809094 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 10:36:49.487549  809094 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 10:36:49.487599  809094 kubeadm.go:310] OS: Linux
	I0929 10:36:49.487674  809094 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 10:36:49.487738  809094 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 10:36:49.487808  809094 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 10:36:49.487877  809094 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 10:36:49.487945  809094 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 10:36:49.488009  809094 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 10:36:49.488084  809094 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 10:36:49.488155  809094 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 10:36:49.488236  809094 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:36:49.488334  809094 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:36:49.488489  809094 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:36:49.488588  809094 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:36:49.490162  809094 out.go:252]   - Generating certificates and keys ...
	I0929 10:36:49.490252  809094 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:36:49.490322  809094 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:36:49.490390  809094 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:36:49.490463  809094 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:36:49.490546  809094 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:36:49.490627  809094 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:36:49.490693  809094 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:36:49.490785  809094 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-529420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:36:49.490833  809094 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:36:49.490923  809094 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-529420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:36:49.490993  809094 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:36:49.491078  809094 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:36:49.491125  809094 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:36:49.491174  809094 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:36:49.491224  809094 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:36:49.491279  809094 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:36:49.491356  809094 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:36:49.491494  809094 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:36:49.491545  809094 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:36:49.491621  809094 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:36:49.491692  809094 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:36:49.493671  809094 out.go:252]   - Booting up control plane ...
	I0929 10:36:49.493764  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:36:49.493839  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:36:49.493913  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:36:49.494005  809094 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:36:49.494093  809094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:36:49.494209  809094 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:36:49.494311  809094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:36:49.494360  809094 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:36:49.494507  809094 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:36:49.494615  809094 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:36:49.494666  809094 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001039286s
	I0929 10:36:49.494753  809094 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:36:49.494843  809094 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 10:36:49.494970  809094 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:36:49.495066  809094 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:36:49.495136  809094 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.704142157s
	I0929 10:36:49.495206  809094 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.204406716s
	I0929 10:36:49.495290  809094 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.502382535s
	I0929 10:36:49.495410  809094 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:36:49.495564  809094 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:36:49.495618  809094 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:36:49.495799  809094 kubeadm.go:310] [mark-control-plane] Marking the node addons-529420 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:36:49.495870  809094 kubeadm.go:310] [bootstrap-token] Using token: yuv8vz.ch07v7kt1fq2e5wb
	I0929 10:36:49.498139  809094 out.go:252]   - Configuring RBAC rules ...
	I0929 10:36:49.498247  809094 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:36:49.498343  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:36:49.498483  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:36:49.498597  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:36:49.498726  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:36:49.498844  809094 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:36:49.498986  809094 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:36:49.499027  809094 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:36:49.499068  809094 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:36:49.499073  809094 kubeadm.go:310] 
	I0929 10:36:49.499124  809094 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:36:49.499129  809094 kubeadm.go:310] 
	I0929 10:36:49.499199  809094 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:36:49.499205  809094 kubeadm.go:310] 
	I0929 10:36:49.499230  809094 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:36:49.499295  809094 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:36:49.499340  809094 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:36:49.499345  809094 kubeadm.go:310] 
	I0929 10:36:49.499394  809094 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:36:49.499400  809094 kubeadm.go:310] 
	I0929 10:36:49.499496  809094 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:36:49.499503  809094 kubeadm.go:310] 
	I0929 10:36:49.499547  809094 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:36:49.499632  809094 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:36:49.499694  809094 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:36:49.499700  809094 kubeadm.go:310] 
	I0929 10:36:49.499768  809094 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:36:49.499834  809094 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:36:49.499840  809094 kubeadm.go:310] 
	I0929 10:36:49.499917  809094 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yuv8vz.ch07v7kt1fq2e5wb \
	I0929 10:36:49.500007  809094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52ed2ed4898a1f91b8e8d0663d1b9110ba480fd365338f5c4892399f9ccfb123 \
	I0929 10:36:49.500027  809094 kubeadm.go:310] 	--control-plane 
	I0929 10:36:49.500033  809094 kubeadm.go:310] 
	I0929 10:36:49.500105  809094 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:36:49.500110  809094 kubeadm.go:310] 
	I0929 10:36:49.500183  809094 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yuv8vz.ch07v7kt1fq2e5wb \
	I0929 10:36:49.500291  809094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52ed2ed4898a1f91b8e8d0663d1b9110ba480fd365338f5c4892399f9ccfb123 
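The --discovery-token-ca-cert-hash above is a SHA-256 of the cluster CA's public key and can be recomputed from the CA cert (here /var/lib/minikube/certs/ca.crt) with the standard openssl pipeline from the kubeadm documentation:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'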
	I0929 10:36:49.500305  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:49.500322  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:49.501607  809094 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 10:36:49.502541  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 10:36:49.513883  809094 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
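The 496-byte 1-k8s.conflist is not dumped to the log. A minimal bridge conflist consistent with the CNI version patched earlier (1.0.0) and the pod CIDR chosen above (10.244.0.0/16) would look roughly like this (a sketch, not the exact file):

    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }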
	I0929 10:36:49.534107  809094 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:36:49.534178  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:49.534217  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-529420 minikube.k8s.io/updated_at=2025_09_29T10_36_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8 minikube.k8s.io/name=addons-529420 minikube.k8s.io/primary=true
	I0929 10:36:49.634858  809094 ops.go:34] apiserver oom_adj: -16
	I0929 10:36:49.634879  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:50.135279  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:50.635199  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:51.135670  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:51.635820  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:52.135653  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:52.635900  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.135655  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.635560  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.702327  809094 kubeadm.go:1105] duration metric: took 4.168204298s to wait for elevateKubeSystemPrivileges
	I0929 10:36:53.702374  809094 kubeadm.go:394] duration metric: took 13.795525537s to StartCluster
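The burst of get sa default calls above (one roughly every 500ms) is a readiness poll for the default service account; as a shell loop the pattern is simply:

    until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the controllers have created the service account
    done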
	I0929 10:36:53.702401  809094 settings.go:142] acquiring lock: {Name:mkba08796c8ac17f1f94d6f1a6799aa6725155e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:53.702549  809094 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:36:53.703073  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/kubeconfig: {Name:mkae1c4acd5fb2ec6cdb06d7e9593ac36dd4ccbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:53.703279  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:36:53.703300  809094 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 10:36:53.703359  809094 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 10:36:53.703546  809094 addons.go:69] Setting ingress=true in profile "addons-529420"
	I0929 10:36:53.703557  809094 addons.go:69] Setting metrics-server=true in profile "addons-529420"
	I0929 10:36:53.703575  809094 addons.go:69] Setting volcano=true in profile "addons-529420"
	I0929 10:36:53.703577  809094 addons.go:238] Setting addon ingress=true in "addons-529420"
	I0929 10:36:53.703583  809094 addons.go:238] Setting addon metrics-server=true in "addons-529420"
	I0929 10:36:53.703600  809094 addons.go:238] Setting addon volcano=true in "addons-529420"
	I0929 10:36:53.703611  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:53.703623  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703615  809094 addons.go:69] Setting storage-provisioner=true in profile "addons-529420"
	I0929 10:36:53.703640  809094 addons.go:69] Setting inspektor-gadget=true in profile "addons-529420"
	I0929 10:36:53.703649  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703650  809094 addons.go:238] Setting addon inspektor-gadget=true in "addons-529420"
	I0929 10:36:53.703670  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703670  809094 addons.go:69] Setting default-storageclass=true in profile "addons-529420"
	I0929 10:36:53.703673  809094 addons.go:238] Setting addon storage-provisioner=true in "addons-529420"
	I0929 10:36:53.703685  809094 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-529420"
	I0929 10:36:53.703665  809094 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-529420"
	I0929 10:36:53.703742  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703752  809094 addons.go:69] Setting registry=true in profile "addons-529420"
	I0929 10:36:53.703784  809094 addons.go:238] Setting addon registry=true in "addons-529420"
	I0929 10:36:53.703797  809094 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-529420"
	I0929 10:36:53.703816  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703839  809094 addons.go:69] Setting registry-creds=true in profile "addons-529420"
	I0929 10:36:53.703853  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703864  809094 addons.go:238] Setting addon registry-creds=true in "addons-529420"
	I0929 10:36:53.703888  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.704091  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704229  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704236  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704241  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704269  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704280  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704369  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704383  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704822  809094 addons.go:69] Setting gcp-auth=true in profile "addons-529420"
	I0929 10:36:53.704853  809094 mustload.go:65] Loading cluster: addons-529420
	I0929 10:36:53.704946  809094 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-529420"
	I0929 10:36:53.704967  809094 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-529420"
	I0929 10:36:53.705010  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.705063  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:53.705314  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.705514  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.705877  809094 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-529420"
	I0929 10:36:53.705901  809094 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-529420"
	I0929 10:36:53.705927  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.706142  809094 out.go:179] * Verifying Kubernetes components...
	I0929 10:36:53.706320  809094 addons.go:69] Setting volumesnapshots=true in profile "addons-529420"
	I0929 10:36:53.706369  809094 addons.go:238] Setting addon volumesnapshots=true in "addons-529420"
	I0929 10:36:53.706411  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703632  809094 addons.go:69] Setting ingress-dns=true in profile "addons-529420"
	I0929 10:36:53.706720  809094 addons.go:238] Setting addon ingress-dns=true in "addons-529420"
	I0929 10:36:53.706754  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.707040  809094 addons.go:69] Setting cloud-spanner=true in profile "addons-529420"
	I0929 10:36:53.707069  809094 addons.go:238] Setting addon cloud-spanner=true in "addons-529420"
	I0929 10:36:53.707096  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703568  809094 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-529420"
	I0929 10:36:53.707458  809094 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-529420"
	I0929 10:36:53.707552  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:53.703557  809094 addons.go:69] Setting yakd=true in profile "addons-529420"
	I0929 10:36:53.707960  809094 addons.go:238] Setting addon yakd=true in "addons-529420"
	I0929 10:36:53.707999  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703623  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.714143  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.714652  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719013  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719122  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719970  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.722035  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.723786  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.778757  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.780491  809094 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:36:53.781884  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:36:53.781909  809094 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:36:53.782011  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.783509  809094 addons.go:238] Setting addon default-storageclass=true in "addons-529420"
	I0929 10:36:53.783669  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.786763  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.807336  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:36:53.807376  809094 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:36:53.807346  809094 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:36:53.807406  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:36:53.808907  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:36:53.809965  809094 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-529420"
	I0929 10:36:53.810002  809094 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:36:53.810022  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:36:53.810026  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.810093  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.809085  809094 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:36:53.809123  809094 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:36:53.810358  809094 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:36:53.810445  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.810589  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.811657  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:36:53.811713  809094 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:36:53.811728  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:36:53.811729  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:36:53.811785  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.812811  809094 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:36:53.812830  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:36:53.812883  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.813727  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:36:53.814898  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:36:53.816283  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:36:53.817528  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 10:36:53.818644  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:36:53.820451  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:36:53.820547  809094 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:36:53.820676  809094 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:36:53.821594  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:36:53.821612  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:36:53.821686  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.822262  809094 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:36:53.823314  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:36:53.822343  809094 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:36:53.823758  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:36:53.823888  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.824274  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.826866  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:36:53.828202  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:36:53.828229  809094 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:36:53.828293  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.831468  809094 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:36:53.831558  809094 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:36:53.831581  809094 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I0929 10:36:53.831884  809094 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:36:53.835051  809094 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:36:53.835084  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:36:53.835153  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.836300  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:36:53.836321  809094 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:36:53.836386  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.836883  809094 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I0929 10:36:53.837031  809094 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:36:53.838461  809094 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:36:53.838481  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:36:53.838563  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.839840  809094 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I0929 10:36:53.846729  809094 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 10:36:53.847497  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I0929 10:36:53.847543  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.847569  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.862719  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.868937  809094 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:36:53.870360  809094 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:36:53.870448  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:36:53.870559  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.876206  809094 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:36:53.876242  809094 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:36:53.879516  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.881669  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
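
The pipeline above patches CoreDNS in place: it dumps the coredns ConfigMap, uses sed to insert a hosts block immediately before the "forward . /etc/resolv.conf" line and a log directive before errors, then replaces the ConfigMap. Assuming the stock kubeadm Corefile (everything except the two inserted pieces is elided here), the patched fragment would read roughly:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...
        ...
    }

The hosts plugin answers host.minikube.internal with the Docker network gateway and falls through to forward for everything else, which is what the "host record injected" line at 10:36:54.257 below confirms.
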
	I0929 10:36:53.897463  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.897986  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.902073  809094 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:36:53.907027  809094 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:36:53.908323  809094 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:36:53.908508  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:36:53.908601  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.912847  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.916620  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.916689  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.918495  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.924984  809094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:36:53.927315  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.928318  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.929791  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.931522  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.931855  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.932316  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	W0929 10:36:53.933912  809094 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 10:36:53.933975  809094 retry.go:31] will retry after 222.37842ms: ssh: handshake failed: EOF
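
Minikube dials one SSH session per addon asset against the same containerized sshd (the burst of sshutil lines above), and under that load a handshake can occasionally fail with EOF; retry.go then schedules another attempt after a short randomized delay (222ms here), and since no further handshake warnings appear, the first retry evidently succeeded. A generic sketch of that retry-with-jittered-backoff pattern; retryWithBackoff below is an illustrative helper, not minikube's actual retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff tries fn, and on failure waits a jittered, growing
    // delay before the next attempt, in the spirit of the retry.go lines
    // seen throughout this log.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Jittered exponential backoff: base*2^i plus up to 50% slack.
    		d := base << uint(i)
    		d += time.Duration(rand.Int63n(int64(d / 2)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 2 {
    			return errors.New("ssh: handshake failed: EOF")
    		}
    		return nil // second dial succeeds, as in the log above
    	})
    	fmt.Println("result:", err)
    }
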
	I0929 10:36:53.947593  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.951956  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:54.017495  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:36:54.017528  809094 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:36:54.037524  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:36:54.047797  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:36:54.047824  809094 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:36:54.088850  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:36:54.102982  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:36:54.104728  809094 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:36:54.104759  809094 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:36:54.105824  809094 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:54.105847  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:36:54.113997  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:36:54.115206  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:36:54.115595  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:36:54.115658  809094 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:36:54.115954  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:36:54.131289  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:36:54.131324  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:36:54.141402  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:36:54.141685  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:36:54.141711  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:36:54.146347  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:36:54.146375  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:36:54.148830  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 10:36:54.158233  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:36:54.187390  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:54.199350  809094 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:36:54.199378  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:36:54.219151  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:36:54.219235  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:36:54.228377  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:36:54.228502  809094 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:36:54.236642  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:36:54.236676  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:36:54.249985  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:36:54.250018  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:36:54.257015  809094 node_ready.go:35] waiting up to 6m0s for node "addons-529420" to be "Ready" ...
	I0929 10:36:54.257465  809094 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 10:36:54.260181  809094 node_ready.go:49] node "addons-529420" is "Ready"
	I0929 10:36:54.260213  809094 node_ready.go:38] duration metric: took 3.156144ms for node "addons-529420" to be "Ready" ...
	I0929 10:36:54.260251  809094 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:36:54.260304  809094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:36:54.275549  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:36:54.288410  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:36:54.288527  809094 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:36:54.309547  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:36:54.326392  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:36:54.326462  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:36:54.371852  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:36:54.371884  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:36:54.385903  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:36:54.406122  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:36:54.441278  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:36:54.441321  809094 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:36:54.475876  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:36:54.475975  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:36:54.558603  809094 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:54.558634  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:36:54.567254  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:36:54.567363  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:36:54.637129  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:36:54.637160  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:36:54.681921  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:54.728318  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:36:54.728360  809094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:36:54.766660  809094 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-529420" context rescaled to 1 replicas
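
The kapi.go line above records minikube scaling the coredns Deployment down to one replica, which is sufficient on a single-node cluster (kubeadm typically ships two). The command-line equivalent for reference, wrapped in Go to match the other sketches here; the deployment name is assumed from the coredns pod names in this log:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Equivalent of the rescale recorded above.
    	out, err := exec.Command("kubectl", "-n", "kube-system",
    		"scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
    	if err != nil {
    		log.Fatalf("scale failed: %v\n%s", err, out)
    	}
    	log.Printf("%s", out) // deployment.apps/coredns scaled
    }
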
	I0929 10:36:54.848450  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:36:54.848663  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:36:54.946361  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:36:54.946523  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:36:55.079437  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:36:55.079471  809094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:36:55.190890  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:36:55.493984  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.390947237s)
	I0929 10:36:55.494004  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.379973258s)
	I0929 10:36:55.494048  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.378812825s)
	I0929 10:36:56.183759  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.042240387s)
	I0929 10:36:56.184086  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.068081762s)
	I0929 10:36:56.184109  809094 addons.go:479] Verifying addon ingress=true in "addons-529420"
	I0929 10:36:56.185647  809094 out.go:179] * Verifying ingress addon...
	I0929 10:36:56.187598  809094 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:36:56.207003  809094 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:36:56.207034  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:56.705051  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.193166  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.222996  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.064717663s)
	I0929 10:36:57.223366  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.035944416s)
	W0929 10:36:57.223400  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:36:57.223437  809094 retry.go:31] will retry after 152.765294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
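
Worth correlating with the transfer earlier in this run: ig-crd.yaml went over as only 14 bytes (the scp line at 10:36:53.810), so the "apiVersion not set, kind not set" failure is consistent with an essentially empty manifest rather than a malformed CRD, and no number of retries will fix it. Every Kubernetes object must declare at least apiVersion and kind; a pre-flight check along these lines would surface the problem before kubectl does (hasTypeMeta is a hypothetical helper, not minikube code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // hasTypeMeta reports whether a manifest declares the apiVersion and
    // kind fields that kubectl validation demands. A 14-byte file cannot
    // contain both, which matches the validation error above.
    func hasTypeMeta(manifest string) bool {
    	var gotAPIVersion, gotKind bool
    	for _, line := range strings.Split(manifest, "\n") {
    		trimmed := strings.TrimSpace(line)
    		if strings.HasPrefix(trimmed, "apiVersion:") {
    			gotAPIVersion = true
    		}
    		if strings.HasPrefix(trimmed, "kind:") {
    			gotKind = true
    		}
    	}
    	return gotAPIVersion && gotKind
    }

    func main() {
    	truncated := "# gadget\n" // stand-in for a truncated ig-crd.yaml
    	fmt.Println(hasTypeMeta(truncated)) // false -> would fail validation
    }
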
	I0929 10:36:57.223474  809094 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.963156736s)
	I0929 10:36:57.223491  809094 api_server.go:72] duration metric: took 3.520164967s to wait for apiserver process to appear ...
	I0929 10:36:57.223498  809094 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:36:57.223518  809094 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 10:36:57.223916  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.948332459s)
	I0929 10:36:57.223942  809094 addons.go:479] Verifying addon registry=true in "addons-529420"
	I0929 10:36:57.224224  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.914632573s)
	I0929 10:36:57.224522  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.838576934s)
	I0929 10:36:57.224572  809094 addons.go:479] Verifying addon metrics-server=true in "addons-529420"
	I0929 10:36:57.224580  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.818313312s)
	I0929 10:36:57.224688  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542731559s)
	W0929 10:36:57.224716  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:36:57.224732  809094 retry.go:31] will retry after 151.241621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
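
This one, by contrast, is the classic CRD-then-CR ordering race: the same apply both registers the volumesnapshot CRDs and creates a VolumeSnapshotClass, and the API server cannot map the new kind until CRD registration completes, hence "ensure CRDs are installed first". The retry (re-issued with apply --force at 10:36:57.376 below) eventually wins; a two-phase apply that waits for the CRDs to report Established avoids the race entirely. A sketch using os/exec, reusing the file paths from this log (the kubectl helper is illustrative):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func kubectl(args ...string) {
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
    	}
    }

    func main() {
    	// Phase 1: register the CRDs only.
    	kubectl("apply",
    		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
    		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
    		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml")
    	// Phase 2: block until the API server has established the new types.
    	kubectl("wait", "--for=condition=established", "--timeout=60s",
    		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
    		"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
    		"crd/volumesnapshots.snapshot.storage.k8s.io")
    	// Phase 3: now VolumeSnapshotClass and the controller can map.
    	kubectl("apply",
    		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
    		"-f", "/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
    		"-f", "/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml")
    }
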
	I0929 10:36:57.224940  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.034013059s)
	I0929 10:36:57.224964  809094 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-529420"
	I0929 10:36:57.225804  809094 out.go:179] * Verifying registry addon...
	I0929 10:36:57.225829  809094 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-529420 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:36:57.227287  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.078403021s)
	I0929 10:36:57.228280  809094 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:36:57.229531  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:36:57.230385  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 10:36:57.236158  809094 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
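
The healthz gate is plain HTTP: minikube polls https://192.168.49.2:8443/healthz until it answers 200 with body "ok", an endpoint that default RBAC leaves readable even unauthenticated. An equivalent standalone probe; InsecureSkipVerify stands in for loading the cluster CA from the profile directory and is for this sketch only:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: skip cert verification rather than wiring in
    		// the cluster CA bundle.
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }
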
	I0929 10:36:57.238077  809094 api_server.go:141] control plane version: v1.34.0
	I0929 10:36:57.238109  809094 api_server.go:131] duration metric: took 14.602902ms to wait for apiserver health ...
	I0929 10:36:57.238122  809094 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:36:57.295055  809094 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:36:57.295086  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:57.297613  809094 system_pods.go:59] 19 kube-system pods found
	I0929 10:36:57.297672  809094 system_pods.go:61] "amd-gpu-device-plugin-5pr2g" [d246c61e-5b06-4a8a-b834-a8104ed29092] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:36:57.297684  809094 system_pods.go:61] "coredns-66bc5c9577-5qvcx" [431a8d9d-251c-4916-9c89-87d909588d39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:36:57.297700  809094 system_pods.go:61] "csi-hostpath-attacher-0" [70a20e1e-6207-4b51-afa5-e013bc35656f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:36:57.297708  809094 system_pods.go:61] "csi-hostpath-resizer-0" [9de479e4-bad8-4f15-bfd7-f7f8a2dd12e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:36:57.297716  809094 system_pods.go:61] "csi-hostpathplugin-z7562" [bb4e1eda-850e-4da2-ae44-f84511d3ee67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:36:57.297725  809094 system_pods.go:61] "etcd-addons-529420" [1b81bfdd-919b-4cd8-a60e-22f79112660f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:36:57.297730  809094 system_pods.go:61] "kube-apiserver-addons-529420" [47505dbb-1863-4367-98f4-9f7ebb2c9415] Running
	I0929 10:36:57.297746  809094 system_pods.go:61] "kube-controller-manager-addons-529420" [37496a5f-d392-4a23-b1a5-1c7a43d653a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:36:57.297755  809094 system_pods.go:61] "kube-ingress-dns-minikube" [a3dd66da-3ebf-43f5-80b9-e99fc50dfcb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:36:57.297760  809094 system_pods.go:61] "kube-proxy-kxjqx" [99197e14-7d79-403b-87f4-b1f17f52f9f5] Running
	I0929 10:36:57.297767  809094 system_pods.go:61] "kube-scheduler-addons-529420" [ebe74bc8-ca56-4474-99cf-5b6633e8a102] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:36:57.297774  809094 system_pods.go:61] "metrics-server-85b7d694d7-6kvbb" [2261b64a-3e18-4fca-8fb4-4a870b5d302f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:36:57.297781  809094 system_pods.go:61] "nvidia-device-plugin-daemonset-rv5nz" [70340435-f458-4a2a-a617-185f04893887] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:36:57.297791  809094 system_pods.go:61] "registry-66898fdd98-9545s" [6b827e39-0a4c-4539-b408-9bfd1cdda6c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:36:57.297799  809094 system_pods.go:61] "registry-creds-764b6fb674-ssj2n" [a5aecf48-b091-49c9-8743-a9d358c2fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:36:57.297807  809094 system_pods.go:61] "registry-proxy-b2bpz" [87ba9cf0-8f09-4423-b674-f77637763bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:36:57.297814  809094 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bxfh2" [bb19aa00-633c-4f38-88f2-3cd29546aec9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.297826  809094 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pwx9p" [5203fa8d-6a4b-4bc9-89be-9a58d780e11e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.297834  809094 system_pods.go:61] "storage-provisioner" [c791ec78-6e64-4b97-a928-e3f1e19e92c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:36:57.297843  809094 system_pods.go:74] duration metric: took 59.712387ms to wait for pod list to return data ...
	I0929 10:36:57.297854  809094 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:36:57.298160  809094 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:36:57.298172  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:57.305651  809094 default_sa.go:45] found service account: "default"
	I0929 10:36:57.305752  809094 default_sa.go:55] duration metric: took 7.889394ms for default service account to be created ...
	I0929 10:36:57.305777  809094 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:36:57.314079  809094 system_pods.go:86] 19 kube-system pods found
	I0929 10:36:57.314192  809094 system_pods.go:89] "amd-gpu-device-plugin-5pr2g" [d246c61e-5b06-4a8a-b834-a8104ed29092] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:36:57.314243  809094 system_pods.go:89] "coredns-66bc5c9577-5qvcx" [431a8d9d-251c-4916-9c89-87d909588d39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:36:57.314277  809094 system_pods.go:89] "csi-hostpath-attacher-0" [70a20e1e-6207-4b51-afa5-e013bc35656f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:36:57.314298  809094 system_pods.go:89] "csi-hostpath-resizer-0" [9de479e4-bad8-4f15-bfd7-f7f8a2dd12e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:36:57.314318  809094 system_pods.go:89] "csi-hostpathplugin-z7562" [bb4e1eda-850e-4da2-ae44-f84511d3ee67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:36:57.314350  809094 system_pods.go:89] "etcd-addons-529420" [1b81bfdd-919b-4cd8-a60e-22f79112660f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:36:57.314386  809094 system_pods.go:89] "kube-apiserver-addons-529420" [47505dbb-1863-4367-98f4-9f7ebb2c9415] Running
	I0929 10:36:57.314408  809094 system_pods.go:89] "kube-controller-manager-addons-529420" [37496a5f-d392-4a23-b1a5-1c7a43d653a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:36:57.314453  809094 system_pods.go:89] "kube-ingress-dns-minikube" [a3dd66da-3ebf-43f5-80b9-e99fc50dfcb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:36:57.314472  809094 system_pods.go:89] "kube-proxy-kxjqx" [99197e14-7d79-403b-87f4-b1f17f52f9f5] Running
	I0929 10:36:57.314489  809094 system_pods.go:89] "kube-scheduler-addons-529420" [ebe74bc8-ca56-4474-99cf-5b6633e8a102] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:36:57.314521  809094 system_pods.go:89] "metrics-server-85b7d694d7-6kvbb" [2261b64a-3e18-4fca-8fb4-4a870b5d302f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:36:57.314541  809094 system_pods.go:89] "nvidia-device-plugin-daemonset-rv5nz" [70340435-f458-4a2a-a617-185f04893887] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:36:57.314558  809094 system_pods.go:89] "registry-66898fdd98-9545s" [6b827e39-0a4c-4539-b408-9bfd1cdda6c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:36:57.314575  809094 system_pods.go:89] "registry-creds-764b6fb674-ssj2n" [a5aecf48-b091-49c9-8743-a9d358c2fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:36:57.314623  809094 system_pods.go:89] "registry-proxy-b2bpz" [87ba9cf0-8f09-4423-b674-f77637763bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:36:57.314641  809094 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxfh2" [bb19aa00-633c-4f38-88f2-3cd29546aec9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.314661  809094 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pwx9p" [5203fa8d-6a4b-4bc9-89be-9a58d780e11e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.314703  809094 system_pods.go:89] "storage-provisioner" [c791ec78-6e64-4b97-a928-e3f1e19e92c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:36:57.314741  809094 system_pods.go:126] duration metric: took 8.926144ms to wait for k8s-apps to be running ...
	I0929 10:36:57.314762  809094 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:36:57.314849  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:36:57.336098  809094 system_svc.go:56] duration metric: took 21.32375ms WaitForService to wait for kubelet
	I0929 10:36:57.336133  809094 kubeadm.go:578] duration metric: took 3.632804576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:36:57.336158  809094 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:36:57.344338  809094 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 10:36:57.344379  809094 node_conditions.go:123] node cpu capacity is 8
	I0929 10:36:57.344396  809094 node_conditions.go:105] duration metric: took 8.231792ms to run NodePressure ...
	I0929 10:36:57.344410  809094 start.go:241] waiting for startup goroutines ...
	I0929 10:36:57.376412  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:57.376539  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:57.692404  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.793184  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:57.793318  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.191147  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:58.233460  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.233602  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:58.692356  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:58.733694  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.733886  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:59.194666  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:59.293196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:59.293196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:59.692198  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:59.792340  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:59.792540  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:00.126856  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.750359151s)
	I0929 10:37:00.126994  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.750412046s)
	W0929 10:37:00.127048  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:00.127077  809094 retry.go:31] will retry after 243.584374ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:00.191955  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:00.292723  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:00.292816  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:00.371587  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:00.692541  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:00.733843  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:00.734315  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:01.147379  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:01.147434  809094 retry.go:31] will retry after 313.436894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:01.192318  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:01.194800  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:37:01.194881  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:37:01.217609  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
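The -f argument to the docker container inspect call above is a Go text/template expression. The sketch below evaluates that same expression against a simplified stub of the container JSON (not Docker's full API type) to show how the 22/tcp host port, 33519 in the sshutil line, gets extracted:

    package main

    import (
        "os"
        "text/template"
    )

    type portBinding struct{ HostPort string }

    // container is a minimal stand-in for the fields the template touches.
    type container struct {
        NetworkSettings struct {
            Ports map[string][]portBinding
        }
    }

    func main() {
        var c container
        c.NetworkSettings.Ports = map[string][]portBinding{
            "22/tcp": {{HostPort: "33519"}}, // value seen in the log line above
        }
        tmpl := template.Must(template.New("port").Parse(
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
        if err := tmpl.Execute(os.Stdout, c); err != nil { // prints: 33519
            panic(err)
        }
    }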
	I0929 10:37:01.269785  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:01.269897  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:01.337246  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
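The ssh_runner.go "scp memory --> <path>" lines push an in-memory byte slice to a file on the node over the SSH connection opened above. A hypothetical sketch under that assumption, using golang.org/x/crypto/ssh (host, port, key path, and payload are placeholders, and minikube's real transfer may use scp framing rather than tee):

    package main

    import (
        "bytes"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33519", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        payload := []byte("placeholder file contents") // the "memory" side of the copy
        sess.Stdin = bytes.NewReader(payload)
        // Stream stdin into the destination path on the node.
        if err := sess.Run("sudo tee /var/lib/minikube/google_cloud_project >/dev/null"); err != nil {
            log.Fatal(err)
        }
    }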
	I0929 10:37:01.363074  809094 addons.go:238] Setting addon gcp-auth=true in "addons-529420"
	I0929 10:37:01.363143  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:37:01.363713  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:37:01.386898  809094 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:37:01.386956  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:37:01.409699  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:37:01.461472  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:01.691916  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:01.733454  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:01.734086  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.191808  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:37:02.208498  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:02.208536  809094 retry.go:31] will retry after 1.150273014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:02.211872  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:37:02.213574  809094 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:37:02.214876  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:37:02.214902  809094 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:37:02.234020  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:02.234162  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.241298  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:37:02.241331  809094 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:37:02.266924  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:37:02.266946  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 10:37:02.293874  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:37:02.684619  809094 addons.go:479] Verifying addon gcp-auth=true in "addons-529420"
	I0929 10:37:02.685986  809094 out.go:179] * Verifying gcp-auth addon...
	I0929 10:37:02.688185  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:37:02.693394  809094 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:37:02.693428  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:02.693405  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:02.733763  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.733885  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.192364  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:03.192855  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:03.234316  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.234465  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:03.359547  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:03.692021  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:03.692117  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:03.733209  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.734038  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:04.191279  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:04.191358  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:04.233692  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:04.234492  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:04.287957  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:04.287997  809094 retry.go:31] will retry after 1.511178059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:04.692167  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:04.692234  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:04.733131  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:04.734062  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.191715  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:05.191921  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:05.233860  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:05.234129  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.691361  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:05.691521  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:05.734135  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:05.734258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.800129  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:06.191447  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:06.191552  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:06.233874  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:06.234076  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:06.411195  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:06.411250  809094 retry.go:31] will retry after 2.100064857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:06.691798  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:06.692073  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:06.732827  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:06.733809  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.191508  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:07.191709  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:07.292363  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:07.292457  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.691546  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:07.691574  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:07.734030  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.734125  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.191781  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:08.191814  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:08.232967  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.233754  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:08.512112  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:08.691009  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:08.691510  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:08.733861  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.733938  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:09.104861  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:09.104897  809094 retry.go:31] will retry after 3.376157006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:09.191367  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:09.191595  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:09.233494  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:09.233556  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:09.691727  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:09.691798  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:09.733640  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:09.733721  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:10.190922  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:10.191004  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:10.233082  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:10.234008  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:10.691979  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:10.692058  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:10.733220  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:10.733906  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:11.191409  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:11.191511  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:11.234068  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:11.234155  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:11.691573  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:11.691635  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:11.732866  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:11.733594  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:12.191979  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:12.192191  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:12.232888  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:12.233877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:12.481250  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:12.691033  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:12.691374  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:12.733300  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:12.733527  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:13.054889  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:13.054935  809094 retry.go:31] will retry after 5.523481945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:13.191285  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:13.191341  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:13.233717  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:13.233802  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:13.691877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:13.691923  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:13.732931  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:13.733745  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:14.191616  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:14.191752  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:14.233768  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:14.233879  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:14.691856  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:14.691909  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:14.733917  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:14.733984  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:15.191840  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:15.191956  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:15.232857  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:15.233963  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:15.691580  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:15.691731  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:15.733913  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:15.735663  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:16.191071  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:16.191303  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:16.233820  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:16.233918  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:16.691314  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:16.691386  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:16.734056  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:16.734170  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:17.191402  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:17.191593  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:17.233380  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:17.233625  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:17.691493  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:17.691699  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:17.732648  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:17.733372  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:18.191911  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:18.192046  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:18.233634  809094 kapi.go:107] duration metric: took 21.004035787s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 10:37:18.234214  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
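The kapi.go:107 "duration metric" line above is a plain elapsed-time measurement around the wait loop, e.g. with time.Since (the Sleep below stands in for the real wait):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(50 * time.Millisecond) // stands in for the poll loop
        fmt.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=registry\n",
            time.Since(start))
    }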
	I0929 10:37:18.578952  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:18.692732  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:18.693005  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:18.734571  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:19.192457  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:19.192698  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:19.236192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:19.359657  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:19.359698  809094 retry.go:31] will retry after 5.374303415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:19.691689  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:19.691870  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:19.734388  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:20.192362  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:20.192436  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:20.235035  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:20.691493  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:20.691559  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:20.734068  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:21.191686  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:21.191754  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:21.234043  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:21.691673  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:21.691816  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:21.734360  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:22.192209  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:22.192364  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:22.234765  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:22.691170  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:22.691296  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:22.734585  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:23.191838  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:23.191855  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:23.234192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:23.691617  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:23.691827  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:23.734342  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:24.191851  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:24.191914  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:24.234052  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:24.691665  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:24.691766  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:24.734744  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:24.778008  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:25.191224  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:25.191245  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:25.233932  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:25.323222  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:25.323263  809094 retry.go:31] will retry after 4.946418269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:25.692211  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:25.692261  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:25.793190  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:26.191796  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:26.192044  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:26.234744  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:26.691669  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:26.691730  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:26.792265  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:27.191948  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:27.192054  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:27.293582  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:27.691274  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:27.691324  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:27.734583  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:28.191896  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:28.192035  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:28.234579  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:28.691393  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:28.691468  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:28.745258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:29.191857  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:29.192145  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:29.293661  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:29.692369  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:29.692409  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:29.734847  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:30.191284  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:30.191549  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:30.235247  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:30.270015  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:30.692450  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:30.692468  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:30.734382  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:30.874687  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:30.874717  809094 retry.go:31] will retry after 9.611404012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:31.191034  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:31.191136  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:31.234596  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:31.691789  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:31.691851  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:31.734487  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:32.192228  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:32.192397  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:32.234895  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:32.691647  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:32.691826  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:32.734231  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:33.191791  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:33.191803  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:33.293681  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:33.691499  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:33.691575  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:33.733652  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:34.191347  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:34.191394  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:34.235018  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:34.692239  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:34.692288  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:34.734356  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:35.191945  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:35.192042  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:35.233802  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:35.691543  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:35.691708  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:35.733748  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:36.190728  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:36.190927  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:36.234247  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:36.691843  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:36.691958  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:36.734324  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:37.232504  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:37.232580  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:37.234832  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:37.691206  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:37.691326  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:37.734619  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:38.191595  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:38.191680  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:38.292586  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:38.692258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:38.692407  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:38.734713  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:39.191465  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:39.191541  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:39.233459  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:39.692264  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:39.692361  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:39.736727  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:40.191041  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:40.191241  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:40.282297  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:40.487332  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:40.700928  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:40.700982  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:40.734342  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:41.111058  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:41.111096  809094 retry.go:31] will retry after 27.798318096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
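The stderr above is kubectl's client-side schema validation: at least one YAML document inside /etc/kubernetes/addons/ig-crd.yaml is missing the apiVersion and kind fields every Kubernetes object must declare, so the apply exits non-zero even though the companion manifests are accepted ("unchanged"/"configured"). Because the manifest itself is malformed, the retries scheduled by retry.go can only reproduce the same error. As a rough illustration of the same pre-flight check (a sketch only, not minikube's code; the file path and struct are assumptions):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// typeMeta mirrors the two fields kubectl's validator complained about.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the manifest
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // one manifest file may hold many '---' documents
	for i := 1; ; i++ {
		var tm typeMeta
		if err := dec.Decode(&tm); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			panic(err)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			// Same condition kubectl reports as "[apiVersion not set, kind not set]".
			fmt.Printf("document %d: apiVersion=%q kind=%q\n", i, tm.APIVersion, tm.Kind)
		}
	}
}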
	I0929 10:37:41.192192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:41.192448  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:41.234734  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:41.691467  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:41.691559  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:41.738052  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:42.191281  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:42.191334  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:42.234363  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:42.692206  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:42.692364  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:42.735143  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:43.192059  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:43.192131  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:43.234581  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:43.691539  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:43.691592  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:43.734168  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:44.192399  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:44.192554  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:44.235151  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:44.712201  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:44.712256  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:44.814275  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:45.191818  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:45.193367  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:45.292640  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:45.692318  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:45.692523  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:45.734670  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:46.191229  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:46.191310  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:46.234266  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:46.691380  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:46.691528  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:46.735585  809094 kapi.go:107] duration metric: took 49.505194902s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 10:37:47.191025  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:47.191255  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:47.691635  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:47.691700  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:48.191562  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:48.191703  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:48.692227  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:48.692295  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.191449  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:49.191458  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.691391  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.691392  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.191310  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.191376  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:50.691312  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.691530  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:51.190897  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:51.191071  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:51.690962  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:51.690975  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:52.191596  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:52.191799  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:52.691821  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:52.691860  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.192013  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:53.192203  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.690815  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.691412  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:54.191531  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:54.191563  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:54.690735  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:54.691192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.191955  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.192190  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:55.691501  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.691567  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:56.191135  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:56.191259  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:56.691523  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:56.691569  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.190892  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.191196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:57.691389  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.691460  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.191486  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.191637  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:58.691468  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.691615  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:59.191501  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:59.191524  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:59.691453  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:59.691624  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:00.191269  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:00.191495  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:00.691597  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:00.691645  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:01.192840  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:01.193251  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:01.692293  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:01.692359  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:02.191486  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:02.191506  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:02.691685  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:02.691815  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:03.191877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:03.191952  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:03.692663  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:03.692768  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:04.191650  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:04.191736  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:04.691141  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:04.691354  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:05.191572  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:05.191608  809094 kapi.go:107] duration metric: took 1m9.004009852s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:38:05.692502  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:06.192606  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:06.692131  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:07.191846  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:07.691923  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.192050  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.691556  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.910309  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:38:09.192274  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:38:09.505831  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:38:09.505869  809094 retry.go:31] will retry after 26.574045109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:38:09.692484  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:10.191984  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:10.692204  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:11.191688  809094 kapi.go:107] duration metric: took 1m8.503599106s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:38:11.193489  809094 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-529420 cluster.
	I0929 10:38:11.194659  809094 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:38:11.195892  809094 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0929 10:38:36.081555  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 10:38:36.665106  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 10:38:36.665238  809094 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0929 10:38:36.667274  809094 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, registry-creds, metrics-server, ingress-dns, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0929 10:38:36.668228  809094 addons.go:514] duration metric: took 1m42.964873486s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass registry-creds metrics-server ingress-dns volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0929 10:38:36.668278  809094 start.go:246] waiting for cluster config update ...
	I0929 10:38:36.668307  809094 start.go:255] writing updated cluster config ...
	I0929 10:38:36.668641  809094 ssh_runner.go:195] Run: rm -f paused
	I0929 10:38:36.672803  809094 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:38:36.676997  809094 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5qvcx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.681719  809094 pod_ready.go:94] pod "coredns-66bc5c9577-5qvcx" is "Ready"
	I0929 10:38:36.681743  809094 pod_ready.go:86] duration metric: took 4.720124ms for pod "coredns-66bc5c9577-5qvcx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.684044  809094 pod_ready.go:83] waiting for pod "etcd-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.688395  809094 pod_ready.go:94] pod "etcd-addons-529420" is "Ready"
	I0929 10:38:36.688443  809094 pod_ready.go:86] duration metric: took 4.375753ms for pod "etcd-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.690812  809094 pod_ready.go:83] waiting for pod "kube-apiserver-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.694897  809094 pod_ready.go:94] pod "kube-apiserver-addons-529420" is "Ready"
	I0929 10:38:36.694938  809094 pod_ready.go:86] duration metric: took 4.098109ms for pod "kube-apiserver-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.696911  809094 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.077390  809094 pod_ready.go:94] pod "kube-controller-manager-addons-529420" is "Ready"
	I0929 10:38:37.077434  809094 pod_ready.go:86] duration metric: took 380.496118ms for pod "kube-controller-manager-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.276699  809094 pod_ready.go:83] waiting for pod "kube-proxy-kxjqx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.677151  809094 pod_ready.go:94] pod "kube-proxy-kxjqx" is "Ready"
	I0929 10:38:37.677180  809094 pod_ready.go:86] duration metric: took 400.453557ms for pod "kube-proxy-kxjqx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.877980  809094 pod_ready.go:83] waiting for pod "kube-scheduler-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:38.277511  809094 pod_ready.go:94] pod "kube-scheduler-addons-529420" is "Ready"
	I0929 10:38:38.277542  809094 pod_ready.go:86] duration metric: took 399.519909ms for pod "kube-scheduler-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:38.277554  809094 pod_ready.go:40] duration metric: took 1.604719848s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:38:38.326458  809094 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 10:38:38.327948  809094 out.go:179] * Done! kubectl is now configured to use "addons-529420" cluster and "default" namespace by default
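The kapi.go:96 lines that dominate this log are a fixed-interval poll: each label selector is re-checked roughly every 500ms (visible in the alternating .19x/.69x timestamps) until the matching pod leaves Pending, at which point kapi.go:107 records the total wait as a duration metric. A dependency-free sketch of that loop shape (illustrative only; minikube's actual implementation differs):

package main

import (
	"context"
	"fmt"
	"time"
)

// waitFor polls check at a fixed interval until it returns true or the
// context expires, returning the elapsed time either way.
func waitFor(ctx context.Context, interval time.Duration, check func() bool) (time.Duration, error) {
	start := time.Now()
	tick := time.NewTicker(interval)
	defer tick.Stop()
	for {
		if check() {
			return time.Since(start), nil
		}
		select {
		case <-ctx.Done():
			return time.Since(start), ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	becomesReadyAt := time.Now().Add(1200 * time.Millisecond) // stand-in for "pod is Ready"
	took, err := waitFor(ctx, 500*time.Millisecond, func() bool {
		return time.Now().After(becomesReadyAt)
	})
	fmt.Printf("duration metric: took %s (err=%v)\n", took.Round(time.Millisecond), err)
}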
	
	
	==> Docker <==
	Sep 29 10:47:46 addons-529420 dockerd[1117]: time="2025-09-29T10:47:46.602200352Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:47:47 addons-529420 dockerd[1117]: time="2025-09-29T10:47:47.358254358Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:47:47 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:47:47Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Sep 29 10:48:02 addons-529420 dockerd[1117]: time="2025-09-29T10:48:02.952724746Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:48:03 addons-529420 dockerd[1117]: time="2025-09-29T10:48:03.429397762Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:48:26 addons-529420 dockerd[1117]: time="2025-09-29T10:48:26.953326476Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:48:27 addons-529420 dockerd[1117]: time="2025-09-29T10:48:27.432384059Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:48:33 addons-529420 dockerd[1117]: time="2025-09-29T10:48:33.698522133Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:48:36 addons-529420 dockerd[1117]: time="2025-09-29T10:48:36.701764564Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:49:07 addons-529420 dockerd[1117]: time="2025-09-29T10:49:07.954086181Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:49:08 addons-529420 dockerd[1117]: time="2025-09-29T10:49:08.436381569Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:49:46 addons-529420 dockerd[1117]: time="2025-09-29T10:49:46.444457226Z" level=info msg="ignoring event" container=620108435d4e5761a7acb4f1738f16f2bcbfd8e87440bfa8f34fab96ea08a68e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:50:16 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:50:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3ace35db5b46186d22767674f314b728476988ef5b2a7e9cc6363dce26a49047/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 29 10:50:17 addons-529420 dockerd[1117]: time="2025-09-29T10:50:17.071522409Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:50:17 addons-529420 dockerd[1117]: time="2025-09-29T10:50:17.832844388Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:50:17 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:50:17Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Sep 29 10:50:31 addons-529420 dockerd[1117]: time="2025-09-29T10:50:31.956765606Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:50:32 addons-529420 dockerd[1117]: time="2025-09-29T10:50:32.432063259Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:50:37 addons-529420 dockerd[1117]: time="2025-09-29T10:50:37.945034529Z" level=info msg="ignoring event" container=3ace35db5b46186d22767674f314b728476988ef5b2a7e9cc6363dce26a49047 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:02 addons-529420 dockerd[1117]: time="2025-09-29T10:51:02.849516165Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=8cc199db322c98fe2850df1cca2645c77ddd525dd3c1634cdf4f5ee82b4c4196
	Sep 29 10:51:02 addons-529420 dockerd[1117]: time="2025-09-29T10:51:02.876292000Z" level=info msg="ignoring event" container=8cc199db322c98fe2850df1cca2645c77ddd525dd3c1634cdf4f5ee82b4c4196 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:03 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:51:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"local-path-provisioner-648f6765c9-fg6hp_local-path-storage\": unexpected command output nsenter: cannot open /proc/3757/ns/net: No such file or directory\n with error: exit status 1"
	Sep 29 10:51:03 addons-529420 dockerd[1117]: time="2025-09-29T10:51:03.030646683Z" level=info msg="ignoring event" container=aae77681f23ba441420baf6d103140dd2f85604ccfb2122e8208c6413965dce0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:51:16 addons-529420 dockerd[1117]: time="2025-09-29T10:51:16.729246195Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:51:18 addons-529420 dockerd[1117]: time="2025-09-29T10:51:18.699355155Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
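Every busybox pull in this window is refused with toomanyrequests: Docker Hub's anonymous pull quota for this runner's IP is exhausted, which is also what left the volcano-scheduler pod in ImagePullBackOff. Docker documents a probe repository for checking the remaining quota; the sketch below uses those documented endpoints (treat the URLs as assumptions in case they change) to fetch an anonymous token and read the ratelimit-* headers:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Anonymous pull token for Docker's rate-limit probe repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; quota state comes back in response headers.
	req, _ := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}

A remaining count of 0 corresponds to the "Not continuing with pull after error" lines above; authenticated pulls or a registry mirror would lift the cap.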
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c663ddd040c70       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   75cd2cc2ccb54       busybox
	53f450233c951       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             13 minutes ago      Running             controller                               0                   7e7e5a36e33c6       ingress-nginx-controller-9cc49f96f-pxng2
	b8bc330ded02a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          13 minutes ago      Running             csi-snapshotter                          0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	097ec25364497       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          13 minutes ago      Running             csi-provisioner                          0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	42c80806f3442       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            13 minutes ago      Running             liveness-probe                           0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	deb6678813fad       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           13 minutes ago      Running             hostpath                                 0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	4c9a460366a7b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                13 minutes ago      Running             node-driver-registrar                    0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	8ea1f8a967cfc       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   13 minutes ago      Running             csi-external-health-monitor-controller   0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	8c797f52e7973       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              13 minutes ago      Running             csi-resizer                              0                   f5028bed08a36       csi-hostpath-resizer-0
	7ed61e7756a57       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             13 minutes ago      Running             csi-attacher                             0                   80edbd2ea7f0f       csi-hostpath-attacher-0
	ac3df2986f815       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      13 minutes ago      Running             volume-snapshot-controller               0                   2f35c5222a676       snapshot-controller-7d9fbc56b8-pwx9p
	cb2fd8f092644       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      13 minutes ago      Running             volume-snapshot-controller               0                   ba080eb06cf32       snapshot-controller-7d9fbc56b8-bxfh2
	2575c5650e053       8c217da6734db                                                                                                                                13 minutes ago      Exited              patch                                    1                   8b7ddb7c14d4d       ingress-nginx-admission-patch-ncp6w
	788b28f41ae4a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   13 minutes ago      Exited              create                                   0                   fe7563f8d917a       ingress-nginx-admission-create-tj9td
	866b95aa367a6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            13 minutes ago      Running             gadget                                   0                   7ee25e2189efc       gadget-8bvtc
	7292e19e493f5       6e38f40d628db                                                                                                                                14 minutes ago      Running             storage-provisioner                      1                   1360822b0ef5c       storage-provisioner
	88b9438083846       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         14 minutes ago      Running             minikube-ingress-dns                     0                   33ea5a9962762       kube-ingress-dns-minikube
	910e07bc5455e       6e38f40d628db                                                                                                                                14 minutes ago      Exited              storage-provisioner                      0                   1360822b0ef5c       storage-provisioner
	c491d217e6a98       52546a367cc9e                                                                                                                                14 minutes ago      Running             coredns                                  0                   5f95340f57eb0       coredns-66bc5c9577-5qvcx
	7f229dd48a2c3       df0860106674d                                                                                                                                14 minutes ago      Running             kube-proxy                               0                   6014edb3b0f91       kube-proxy-kxjqx
	cc27cd95c2f3e       90550c43ad2bc                                                                                                                                14 minutes ago      Running             kube-apiserver                           0                   2e57097224cc0       kube-apiserver-addons-529420
	ea3b2c765f2ae       a0af72f2ec6d6                                                                                                                                14 minutes ago      Running             kube-controller-manager                  0                   ce79a900c656a       kube-controller-manager-addons-529420
	7f38046322a2d       46169d968e920                                                                                                                                14 minutes ago      Running             kube-scheduler                           0                   74b0641932447       kube-scheduler-addons-529420
	3a3152518f29b       5f1f5298c888d                                                                                                                                14 minutes ago      Running             etcd                                     0                   099566320ee0a       etcd-addons-529420
	
	
	==> controller_ingress [53f450233c95] <==
	I0929 10:38:04.686885       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"7d8260c8-2dc2-42de-8475-3174f0fbcebe", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0929 10:38:05.882258       7 nginx.go:319] "Starting NGINX process"
	I0929 10:38:05.882452       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0929 10:38:05.882700       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0929 10:38:05.883222       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 10:38:05.888729       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0929 10:38:05.888761       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-pxng2"
	I0929 10:38:05.892640       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-pxng2" node="addons-529420"
	I0929 10:38:05.926797       7 controller.go:228] "Backend successfully reloaded"
	I0929 10:38:05.926911       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0929 10:38:05.926976       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-pxng2", UID:"587b0efc-8a0d-48e9-8f76-f6c72ed2a5f2", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0929 10:38:05.993545       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-pxng2" node="addons-529420"
	W0929 10:45:37.492997       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 10:45:37.494693       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0929 10:45:37.498137       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0929 10:45:37.498313       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"1675f37f-1842-4388-a104-89352b87f57a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2589", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 10:45:40.520096       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 10:45:40.520746       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 10:45:40.560990       7 controller.go:228] "Backend successfully reloaded"
	I0929 10:45:40.561244       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-pxng2", UID:"587b0efc-8a0d-48e9-8f76-f6c72ed2a5f2", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 10:45:43.853384       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 10:46:05.896671       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I0929 10:46:05.900481       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"1675f37f-1842-4388-a104-89352b87f57a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2672", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 10:46:05.900661       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 10:47:24.477409       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
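The controller's repeated `Service "default/nginx" does not have any active Endpoint` warnings are consistent with the pull failures in the Docker log: the Ingress default/nginx-ingress synced fine, but the Service it targets has no Ready pods behind it. A hedged client-go sketch for confirming that from outside the cluster (kubeconfig path and object names are assumptions):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ready pod IPs appear under Addresses; unready ones under NotReadyAddresses.
	ep, err := cs.CoreV1().Endpoints("default").Get(context.Background(), "nginx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready, notReady := 0, 0
	for _, ss := range ep.Subsets {
		ready += len(ss.Addresses)
		notReady += len(ss.NotReadyAddresses)
	}
	fmt.Printf("default/nginx: %d ready, %d not-ready endpoint address(es)\n", ready, notReady)
}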
	
	
	==> coredns [c491d217e6a9] <==
	[INFO] 10.244.0.7:50415 - 40142 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000140179s
	[INFO] 10.244.0.7:51631 - 16547 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000077812s
	[INFO] 10.244.0.7:51631 - 16342 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000121622s
	[INFO] 10.244.0.7:54120 - 55745 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000074894s
	[INFO] 10.244.0.7:54120 - 55427 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00012144s
	[INFO] 10.244.0.7:44276 - 57260 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156752s
	[INFO] 10.244.0.7:44276 - 57057 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019348s
	[INFO] 10.244.0.25:60127 - 41853 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000357536s
	[INFO] 10.244.0.25:49489 - 60455 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000420887s
	[INFO] 10.244.0.25:56444 - 64863 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123075s
	[INFO] 10.244.0.25:47991 - 45219 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000219459s
	[INFO] 10.244.0.25:41073 - 37798 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132747s
	[INFO] 10.244.0.25:56023 - 28022 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00018111s
	[INFO] 10.244.0.25:34207 - 60772 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003985247s
	[INFO] 10.244.0.25:34528 - 32480 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00679976s
	[INFO] 10.244.0.25:57103 - 8380 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006696106s
	[INFO] 10.244.0.25:58770 - 56546 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.011895734s
	[INFO] 10.244.0.25:46155 - 33827 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004624849s
	[INFO] 10.244.0.25:33222 - 31647 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.008860505s
	[INFO] 10.244.0.25:50865 - 49473 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.0056946s
	[INFO] 10.244.0.25:47833 - 61181 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006586754s
	[INFO] 10.244.0.25:57597 - 20971 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000971648s
	[INFO] 10.244.0.25:51886 - 57600 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00104711s
	[INFO] 10.244.0.28:43176 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000337648s
	[INFO] 10.244.0.28:43537 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000209032s
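The burst of NXDOMAIN answers for storage.googleapis.com is normal search-path expansion, not a failure: the querying pod's resolv.conf (see the cri-dockerd rewrite in the Docker log) carries a seven-entry search list and ndots:5, so an external name with only two dots is tried against every search suffix before being sent as-is. A small sketch reproducing the candidate order seen above (search list inferred from the queries; illustrative only):

package main

import (
	"fmt"
	"strings"
)

// candidates lists the queries a glibc-style resolver would try for name,
// given a search list and the ndots threshold.
func candidates(name string, search []string, ndots int) []string {
	if strings.HasSuffix(name, ".") { // trailing dot: fully qualified, no expansion
		return []string{name}
	}
	var out []string
	enoughDots := strings.Count(name, ".") >= ndots
	if enoughDots {
		out = append(out, name+".")
	}
	for _, s := range search {
		out = append(out, name+"."+s+".")
	}
	if !enoughDots {
		out = append(out, name+".")
	}
	return out
}

func main() {
	search := []string{
		"gcp-auth.svc.cluster.local", "svc.cluster.local", "cluster.local", "local",
		"europe-west2-a.c.k8s-minikube.internal", "c.k8s-minikube.internal", "google.internal",
	}
	for _, q := range candidates("storage.googleapis.com", search, 5) {
		fmt.Println(q) // matches the NXDOMAIN-then-NOERROR sequence in the log
	}
}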
	
	
	==> describe nodes <==
	Name:               addons-529420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-529420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=addons-529420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_36_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-529420
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-529420"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:36:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-529420
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:51:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-529420
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a82221623174cb89aca2e3330c89f2e
	  System UUID:                a576baa7-93e3-43ba-ad25-9862293a3344
	  Boot ID:                    2fed06dd-7ca1-49fc-8cb4-296c71642eea
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-8bvtc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-pxng2    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-5qvcx                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpathplugin-z7562                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-addons-529420                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-529420                250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-529420       200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-kxjqx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-529420                100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-7d9fbc56b8-bxfh2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-7d9fbc56b8-pwx9p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node addons-529420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node addons-529420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node addons-529420 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node addons-529420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node addons-529420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node addons-529420 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node addons-529420 event: Registered Node addons-529420 in Controller
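
Note: the allocation table and events above are standard kubectl describe-node output captured by the test harness; to regenerate the same view against this profile (assuming the cluster is still up):

    kubectl --context addons-529420 describe node addons-529420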
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 60 55 0a b3 db 08 06
	[  +0.024494] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 ec 8f 40 19 66 08 06
	[  +2.802739] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 72 ba 15 15 c7 ec 08 06
	[  +0.014450] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9a 38 c6 be 73 e8 08 06
	[  +2.936645] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 df 88 08 78 89 08 06
	[  +0.326891] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 b3 ea 3d 15 d3 08 06
	[  +0.292071] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 79 3e 6e 1c 85 08 06
	[Sep29 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 fd 85 4d 34 7c 08 06
	[  +6.578912] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 8d 25 f9 d3 f7 08 06
	[  +0.000514] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[Sep29 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 20 b3 46 3c a7 08 06
	[  +0.000504] IPv4: martian source 10.244.0.28 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[  +0.000784] IPv4: martian source 10.244.0.28 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 7b 6f c2 51 08 06
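
Note: the "martian source" entries are the kernel logging packets whose 10.244.0.x pod source addresses fail the reverse-path check on eth0; with the Docker driver this bridge traffic is typically harmless noise. A sketch for extracting just these entries, assuming minikube ssh's command passthrough:

    minikube -p addons-529420 ssh -- dmesg | grep -i martian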
	
	
	==> etcd [3a3152518f29] <==
	{"level":"warn","ts":"2025-09-29T10:36:45.881854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.888653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.902856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.911144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.918112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.975495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:57.777500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36938","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:37:03.895377Z","caller":"traceutil/trace.go:172","msg":"trace[858034751] transaction","detail":"{read_only:false; response_revision:1015; number_of_response:1; }","duration":"128.948339ms","start":"2025-09-29T10:37:03.766408Z","end":"2025-09-29T10:37:03.895357Z","steps":["trace[858034751] 'process raft request'  (duration: 128.809516ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:37:04.039951Z","caller":"traceutil/trace.go:172","msg":"trace[799810606] transaction","detail":"{read_only:false; response_revision:1016; number_of_response:1; }","duration":"139.742546ms","start":"2025-09-29T10:37:03.900189Z","end":"2025-09-29T10:37:04.039932Z","steps":["trace[799810606] 'process raft request'  (duration: 139.624536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:37:23.432233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.465158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.491685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.505493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.513523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.523591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.578347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.609684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.635915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.643629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.654366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.666394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55308","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:38:03.955142Z","caller":"traceutil/trace.go:172","msg":"trace[1495019545] transaction","detail":"{read_only:false; response_revision:1324; number_of_response:1; }","duration":"128.921298ms","start":"2025-09-29T10:38:03.826200Z","end":"2025-09-29T10:38:03.955121Z","steps":["trace[1495019545] 'process raft request'  (duration: 102.9549ms)","trace[1495019545] 'compare'  (duration: 25.853332ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T10:46:45.426042Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1847}
	{"level":"info","ts":"2025-09-29T10:46:45.453266Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1847,"took":"26.541323ms","hash":3270274898,"current-db-size-bytes":10637312,"current-db-size":"11 MB","current-db-size-in-use-bytes":6828032,"current-db-size-in-use":"6.8 MB"}
	{"level":"info","ts":"2025-09-29T10:46:45.453313Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3270274898,"revision":1847,"compact-revision":-1}
	
	
	==> kernel <==
	 10:51:31 up  2:33,  0 users,  load average: 0.29, 0.32, 0.65
	Linux addons-529420 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [cc27cd95c2f3] <==
	W0929 10:44:42.793473       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0929 10:44:42.800545       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0929 10:44:42.861870       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W0929 10:44:42.888433       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0929 10:44:42.939793       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W0929 10:44:43.206461       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W0929 10:44:43.309112       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	I0929 10:44:45.905061       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0929 10:45:02.075817       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47796: use of closed network connection
	E0929 10:45:02.267143       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47826: use of closed network connection
	I0929 10:45:11.278756       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.35.49"}
	I0929 10:45:32.176305       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:45:37.495617       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 10:45:37.674939       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.237.222"}
	I0929 10:45:46.221514       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 10:46:01.566752       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:34.023075       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:46.423056       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 10:47:12.562468       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:47:52.427235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:48:29.056133       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:49:06.267590       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:49:32.974902       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:50:24.249376       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:50:54.444621       1 stats.go:136] "Error getting keys" err="empty key: \"\""
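
Note: the "Terminating all watchers from cacher *.volcano.sh" burst at 10:44:42 lines up with the Volcano addon being torn down after its test failed: deleting the CRDs shuts down their watch caches. To pull the apiserver stream out of a full log capture for closer inspection:

    minikube -p addons-529420 logs --file=minikube-logs.txt
    grep -n 'Terminating all watchers' minikube-logs.txt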
	
	
	==> kube-controller-manager [ea3b2c765f2a] <==
	E0929 10:50:41.458832       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:41.459880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:44.637449       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:44.638583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:52.116541       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:52.117733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:53.322646       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:53.323744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:53.399239       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:51:03.266386       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:51:03.267494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:51:06.429616       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:51:06.430753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:51:07.673865       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:51:07.674984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:51:08.400185       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:51:13.341276       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:51:13.342406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:51:18.573028       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:51:18.574246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:51:19.716586       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:51:19.717793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:51:23.400749       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0929 10:51:31.492187       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:51:31.493262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
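
Note: the repeated pv_controller error is the likely root cause of the LocalPath failure below: the "local-path" StorageClass does not exist, so default/test-pvc can never be provisioned or bound. The PartialObjectMetadata reflector errors are consistent with metadata informers still retrying watches on the deleted Volcano CRDs. A sketch for checking and, assuming the addon name matches the testdata directory, re-enabling the provisioner:

    kubectl --context addons-529420 get storageclass
    minikube -p addons-529420 addons enable storage-provisioner-rancher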
	
	
	==> kube-proxy [7f229dd48a2c] <==
	I0929 10:36:55.962965       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:36:56.114754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:36:56.216582       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:56.217225       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:36:56.218058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:56.276653       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:36:56.277056       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:56.332280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:56.332751       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:56.332790       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:56.334977       1 config.go:200] "Starting service config controller"
	I0929 10:36:56.335003       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:56.335633       1 config.go:309] "Starting node config controller"
	I0929 10:36:56.335645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:56.335652       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:56.335877       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:56.335887       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:56.335924       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:56.335932       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:56.435153       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:56.436458       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:56.436501       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
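
Note: the only error here is the configuration hint that nodePortAddresses is unset; the log itself suggests restricting NodePorts with --nodeport-addresses primary. On this kubeadm-managed cluster the setting lives in the kube-proxy ConfigMap (nodePortAddresses: ["primary"] under KubeProxyConfiguration); to inspect it:

    kubectl --context addons-529420 -n kube-system get configmap kube-proxy -o yaml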
	
	
	==> kube-scheduler [7f38046322a2] <==
	I0929 10:36:46.773756       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0929 10:36:46.774579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:36:46.774618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 10:36:46.774676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:36:46.775018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:36:46.775287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:36:46.775438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:36:46.775611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:36:46.775732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:36:46.775732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:36:46.775766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:36:46.776166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:36:46.776381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:36:46.776394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:36:46.776523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:36:46.776523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:36:46.776556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:36:46.776598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:36:46.776614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 10:36:46.776631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:36:47.607010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:36:47.607036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:36:47.611460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:36:47.673666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0929 10:36:47.973483       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
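
Note: the "Failed to watch ... forbidden" burst is the usual startup race before the scheduler's RBAC bindings and the extension-apiserver-authentication ConfigMap sync; the final "Caches are synced" line shows it resolved. A quick after-the-fact permission check:

    kubectl --context addons-529420 auth can-i list pods --as=system:kube-scheduler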
	
	
	==> kubelet <==
	Sep 29 10:50:38 addons-529420 kubelet[2313]: I0929 10:50:38.140465    2313 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17506ae-7246-4791-9e59-d2d2220b079a-kube-api-access-wzf5x" (OuterVolumeSpecName: "kube-api-access-wzf5x") pod "b17506ae-7246-4791-9e59-d2d2220b079a" (UID: "b17506ae-7246-4791-9e59-d2d2220b079a"). InnerVolumeSpecName "kube-api-access-wzf5x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 29 10:50:38 addons-529420 kubelet[2313]: I0929 10:50:38.239563    2313 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b17506ae-7246-4791-9e59-d2d2220b079a-data\") on node \"addons-529420\" DevicePath \"\""
	Sep 29 10:50:38 addons-529420 kubelet[2313]: I0929 10:50:38.239599    2313 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b17506ae-7246-4791-9e59-d2d2220b079a-script\") on node \"addons-529420\" DevicePath \"\""
	Sep 29 10:50:38 addons-529420 kubelet[2313]: I0929 10:50:38.239609    2313 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wzf5x\" (UniqueName: \"kubernetes.io/projected/b17506ae-7246-4791-9e59-d2d2220b079a-kube-api-access-wzf5x\") on node \"addons-529420\" DevicePath \"\""
	Sep 29 10:50:40 addons-529420 kubelet[2313]: I0929 10:50:40.727941    2313 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b17506ae-7246-4791-9e59-d2d2220b079a" path="/var/lib/kubelet/pods/b17506ae-7246-4791-9e59-d2d2220b079a/volumes"
	Sep 29 10:50:49 addons-529420 kubelet[2313]: E0929 10:50:49.712949    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:50:52 addons-529420 kubelet[2313]: E0929 10:50:52.715731    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:51:03 addons-529420 kubelet[2313]: I0929 10:51:03.200792    2313 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b76c66aa-e866-4ab6-b41d-165b3795efac-config-volume\") pod \"b76c66aa-e866-4ab6-b41d-165b3795efac\" (UID: \"b76c66aa-e866-4ab6-b41d-165b3795efac\") "
	Sep 29 10:51:03 addons-529420 kubelet[2313]: I0929 10:51:03.200857    2313 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcbgb\" (UniqueName: \"kubernetes.io/projected/b76c66aa-e866-4ab6-b41d-165b3795efac-kube-api-access-hcbgb\") pod \"b76c66aa-e866-4ab6-b41d-165b3795efac\" (UID: \"b76c66aa-e866-4ab6-b41d-165b3795efac\") "
	Sep 29 10:51:03 addons-529420 kubelet[2313]: I0929 10:51:03.201359    2313 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b76c66aa-e866-4ab6-b41d-165b3795efac-config-volume" (OuterVolumeSpecName: "config-volume") pod "b76c66aa-e866-4ab6-b41d-165b3795efac" (UID: "b76c66aa-e866-4ab6-b41d-165b3795efac"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Sep 29 10:51:03 addons-529420 kubelet[2313]: I0929 10:51:03.203256    2313 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76c66aa-e866-4ab6-b41d-165b3795efac-kube-api-access-hcbgb" (OuterVolumeSpecName: "kube-api-access-hcbgb") pod "b76c66aa-e866-4ab6-b41d-165b3795efac" (UID: "b76c66aa-e866-4ab6-b41d-165b3795efac"). InnerVolumeSpecName "kube-api-access-hcbgb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 29 10:51:03 addons-529420 kubelet[2313]: I0929 10:51:03.301692    2313 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b76c66aa-e866-4ab6-b41d-165b3795efac-config-volume\") on node \"addons-529420\" DevicePath \"\""
	Sep 29 10:51:03 addons-529420 kubelet[2313]: I0929 10:51:03.301729    2313 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hcbgb\" (UniqueName: \"kubernetes.io/projected/b76c66aa-e866-4ab6-b41d-165b3795efac-kube-api-access-hcbgb\") on node \"addons-529420\" DevicePath \"\""
	Sep 29 10:51:03 addons-529420 kubelet[2313]: E0929 10:51:03.715532    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:51:04 addons-529420 kubelet[2313]: I0929 10:51:04.041668    2313 scope.go:117] "RemoveContainer" containerID="8cc199db322c98fe2850df1cca2645c77ddd525dd3c1634cdf4f5ee82b4c4196"
	Sep 29 10:51:04 addons-529420 kubelet[2313]: E0929 10:51:04.713610    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:51:04 addons-529420 kubelet[2313]: I0929 10:51:04.723224    2313 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b76c66aa-e866-4ab6-b41d-165b3795efac" path="/var/lib/kubelet/pods/b76c66aa-e866-4ab6-b41d-165b3795efac/volumes"
	Sep 29 10:51:16 addons-529420 kubelet[2313]: E0929 10:51:16.731890    2313 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 10:51:16 addons-529420 kubelet[2313]: E0929 10:51:16.731956    2313 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 10:51:16 addons-529420 kubelet[2313]: E0929 10:51:16.732060    2313 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(f831007e-4eb0-4f0f-b585-d846fca29f0a): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:51:16 addons-529420 kubelet[2313]: E0929 10:51:16.732109    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:51:18 addons-529420 kubelet[2313]: E0929 10:51:18.701926    2313 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 10:51:18 addons-529420 kubelet[2313]: E0929 10:51:18.701991    2313 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 10:51:18 addons-529420 kubelet[2313]: E0929 10:51:18.702082    2313 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(f53396f1-c1a5-46b0-8089-67b5c438ba7b): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:51:18 addons-529420 kubelet[2313]: E0929 10:51:18.702116    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
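
Note: every pull failure above is Docker Hub's unauthenticated rate limit (toomanyrequests), not a cluster fault. One mitigation sketch is pulling with credentials via an image pull secret; the username/token values here are placeholders:

    kubectl --context addons-529420 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<access-token>

then reference regcred from the pod spec's imagePullSecrets.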
	
	
	==> storage-provisioner [7292e19e493f] <==
	W0929 10:51:06.535086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:08.538493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:08.542409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:10.546103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:10.551388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:12.554771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:12.558940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:14.561973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:14.567654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:16.570698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:16.576696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:18.579867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:18.584014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:20.587849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:20.593565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:22.596985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:22.601691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:24.605505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:24.610201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:26.613630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:26.617943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:28.621721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:28.626370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:30.630906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:51:30.635764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
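
Note: these client-go deprecation warnings indicate the storage-provisioner still reads or writes v1 Endpoints (most likely for leader election) rather than discovery.k8s.io/v1 EndpointSlice; they are warnings, not failures. To see the Endpoints objects it could be touching:

    kubectl --context addons-529420 -n kube-system get endpoints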
	
	
	==> storage-provisioner [910e07bc5455] <==
	I0929 10:36:56.614357       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 10:37:26.626869       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-529420 -n addons-529420
helpers_test.go:269: (dbg) Run:  kubectl --context addons-529420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w: exit status 1 (93.539307ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-529420/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:45:37 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nbt6z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nbt6z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m55s                  default-scheduler  Successfully assigned default/nginx to addons-529420
	  Normal   Pulling    3m (x5 over 5m54s)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m59s (x5 over 5m53s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m59s (x5 over 5m53s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    40s (x21 over 5m53s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     40s (x21 over 5m53s)   kubelet            Error: ImagePullBackOff
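
Note: the event trail confirms the same rate-limit failure seen in the kubelet log. To reproduce the pull directly against the node's Docker daemon (docker driver assumed):

    minikube -p addons-529420 ssh -- docker pull docker.io/nginx:alpine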
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-529420/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:45:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxqpd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-xxqpd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m3s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-529420
	  Warning  Failed     6m1s                   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m57s (x5 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m56s (x5 over 6m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m56s (x4 over 5m45s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     55s (x20 over 6m)      kubelet            Error: ImagePullBackOff
	  Normal   BackOff    43s (x21 over 6m)      kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-29hwq (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-29hwq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tj9td" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ncp6w" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w: exit status 1
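
Note: unlike the two nginx pods, test-local-path never scheduled (Node: <none>, no events) because its PVC cannot bind without the missing "local-path" StorageClass flagged in the controller-manager log. To confirm from the claim side:

    kubectl --context addons-529420 -n default describe pvc test-pvc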
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-529420 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.555669102s)
--- FAIL: TestAddons/parallel/CSI (389.19s)

TestAddons/parallel/LocalPath (345.1s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-529420 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-529420 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
    [... identical polling line repeated 77 more times during the 5m0s wait ...]
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-529420 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.119µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
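For reference, the polling that produced those repeated invocations is a poll-until-deadline loop. A minimal Go sketch follows; waitForPVCPhase, the 1-second tick, and the 5-minute deadline are illustrative assumptions, not minikube's actual helper code:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase shells out to kubectl on a fixed interval until the PVC
// reports the wanted phase or the context deadline expires. The shape of
// the kubectl invocation is taken verbatim from the log lines above.
func waitForPVCPhase(ctx context.Context, kubeContext, name, namespace, wantPhase string) error {
	ticker := time.NewTicker(1 * time.Second) // assumed interval
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Matches the "failed waiting for PVC ... context deadline exceeded" outcome above.
			return fmt.Errorf("failed waiting for PVC %s: %w", name, ctx.Err())
		case <-ticker.C:
			out, err := exec.CommandContext(ctx, "kubectl",
				"--context", kubeContext,
				"get", "pvc", name,
				"-o", "jsonpath={.status.phase}",
				"-n", namespace).Output()
			if err != nil {
				continue // transient kubectl errors: keep polling until the deadline
			}
			if strings.TrimSpace(string(out)) == wantPhase {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute) // assumed deadline
	defer cancel()
	if err := waitForPVCPhase(ctx, "addons-529420", "test-pvc", "default", "Bound"); err != nil {
		fmt.Println(err)
	}
}

Each tick emits one Run line, which is what produces the stream of identical kubectl invocations collapsed above.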
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-529420
helpers_test.go:243: (dbg) docker inspect addons-529420:

-- stdout --
	[
	    {
	        "Id": "c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70",
	        "Created": "2025-09-29T10:36:32.534326886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 809726,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:36:32.572013527Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/hostname",
	        "HostsPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/hosts",
	        "LogPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70-json.log",
	        "Name": "/addons-529420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-529420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-529420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70",
	                "LowerDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21-init/diff:/var/lib/docker/overlay2/8785ead96546661033f04e32798b1ac8b184b1c04fc8862515b87d56909b6b9d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-529420",
	                "Source": "/var/lib/docker/volumes/addons-529420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-529420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-529420",
	                "name.minikube.sigs.k8s.io": "addons-529420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "198f81b793e83c57ca2854cf8d1d0464141313e0495f23d0367891bbb8b05aa2",
	            "SandboxKey": "/var/run/docker/netns/198f81b793e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-529420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:86:fe:b5:4c:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3724749a501df880fce098abbef40c1fc5e7d4b0ca633d7b7b33237babda66c6",
	                    "EndpointID": "381be5dcde6efd6127536cd89de924bde7f3b532aa3b84038bdaf13a91eea4d0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-529420",
	                        "c119ca8ea690"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
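As an aside, the port bindings buried in that inspect dump can be extracted programmatically rather than read by eye. A minimal Go sketch, assuming only the field names visible in the JSON above (the program is illustrative, not part of the test suite):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// container mirrors just the slice of `docker inspect` output needed here;
// the field names follow the JSON dump above.
type container struct {
	Name            string
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// docker inspect prints a JSON array with one element per container.
	out, err := exec.Command("docker", "inspect", "addons-529420").Output()
	if err != nil {
		log.Fatal(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		log.Fatal(err)
	}
	for _, c := range cs {
		// 8443/tcp is the apiserver port; per the dump above it is bound
		// to 127.0.0.1:33522 on this host.
		for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("%s apiserver mapped to %s:%s\n", c.Name, b.HostIp, b.HostPort)
		}
	}
}

Run against this host, it would print the 127.0.0.1:33522 binding shown for 8443/tcp above.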
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-529420 -n addons-529420
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-529420 logs -n 25: (1.019491466s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ delete  │ -p download-only-239254                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-239254   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ delete  │ -p download-only-079557                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-079557   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ delete  │ -p download-only-239254                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-239254   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ start   │ --download-only -p download-docker-537454 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-537454 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ delete  │ -p download-docker-537454                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-537454 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ start   │ --download-only -p binary-mirror-606832 --alsologtostderr --binary-mirror http://127.0.0.1:41679 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-606832   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ delete  │ -p binary-mirror-606832                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-606832   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ addons  │ enable dashboard -p addons-529420                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ addons  │ disable dashboard -p addons-529420                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ start   │ -p addons-529420 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:38 UTC │
	│ addons  │ addons-529420 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:44 UTC │ 29 Sep 25 10:44 UTC │
	│ addons  │ addons-529420 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ enable headlamp -p addons-529420 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ ip      │ addons-529420 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	│ addons  │ addons-529420 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-529420                                                                                                                                                                                                                                                                                                                                                                                             │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	│ addons  │ addons-529420 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:47 UTC │ 29 Sep 25 10:47 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:36:12
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:36:12.344032  809094 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:36:12.344307  809094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:36:12.344317  809094 out.go:374] Setting ErrFile to fd 2...
	I0929 10:36:12.344322  809094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:36:12.344556  809094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 10:36:12.345145  809094 out.go:368] Setting JSON to false
	I0929 10:36:12.346064  809094 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8313,"bootTime":1759133859,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:36:12.346178  809094 start.go:140] virtualization: kvm guest
	I0929 10:36:12.347964  809094 out.go:179] * [addons-529420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:36:12.349153  809094 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:36:12.349158  809094 notify.go:220] Checking for updates...
	I0929 10:36:12.351412  809094 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:36:12.352567  809094 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:36:12.353660  809094 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:36:12.354776  809094 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:36:12.356142  809094 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:36:12.357739  809094 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:36:12.381211  809094 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:36:12.381322  809094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:36:12.437520  809094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:36:12.425260973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:36:12.437676  809094 docker.go:318] overlay module found
	I0929 10:36:12.439238  809094 out.go:179] * Using the docker driver based on user configuration
	I0929 10:36:12.440251  809094 start.go:304] selected driver: docker
	I0929 10:36:12.440267  809094 start.go:924] validating driver "docker" against <nil>
	I0929 10:36:12.440280  809094 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:36:12.440926  809094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:36:12.500513  809094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:36:12.490511662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:36:12.500726  809094 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:36:12.500930  809094 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:36:12.502519  809094 out.go:179] * Using Docker driver with root privileges
	I0929 10:36:12.503551  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:12.503623  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:12.503635  809094 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:36:12.503733  809094 start.go:348] cluster config:
	{Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:36:12.504920  809094 out.go:179] * Starting "addons-529420" primary control-plane node in "addons-529420" cluster
	I0929 10:36:12.505901  809094 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 10:36:12.506980  809094 out.go:179] * Pulling base image v0.0.48 ...
	I0929 10:36:12.508072  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:12.508109  809094 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:36:12.508127  809094 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 10:36:12.508138  809094 cache.go:58] Caching tarball of preloaded images
	I0929 10:36:12.508233  809094 preload.go:172] Found /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 10:36:12.508247  809094 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 10:36:12.508686  809094 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json ...
	I0929 10:36:12.508722  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json: {Name:mkf2bdb1c1bab8c96c543722d15e10cff077f479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:12.525703  809094 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:36:12.525838  809094 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:36:12.525859  809094 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:36:12.525863  809094 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:36:12.525870  809094 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:36:12.525876  809094 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 10:36:25.658111  809094 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 10:36:25.658161  809094 cache.go:232] Successfully downloaded all kic artifacts
	I0929 10:36:25.658213  809094 start.go:360] acquireMachinesLock for addons-529420: {Name:mkb1d52da565bdfba9b5d34f2b7f2b7e0a10b5f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:36:25.658335  809094 start.go:364] duration metric: took 99.814µs to acquireMachinesLock for "addons-529420"
	I0929 10:36:25.658362  809094 start.go:93] Provisioning new machine with config: &{Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 10:36:25.658452  809094 start.go:125] createHost starting for "" (driver="docker")
	I0929 10:36:25.660487  809094 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 10:36:25.660770  809094 start.go:159] libmachine.API.Create for "addons-529420" (driver="docker")
	I0929 10:36:25.660815  809094 client.go:168] LocalClient.Create starting
	I0929 10:36:25.660956  809094 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem
	I0929 10:36:25.875289  809094 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem
	I0929 10:36:26.406689  809094 cli_runner.go:164] Run: docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 10:36:26.424628  809094 cli_runner.go:211] docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 10:36:26.424709  809094 network_create.go:284] running [docker network inspect addons-529420] to gather additional debugging logs...
	I0929 10:36:26.424730  809094 cli_runner.go:164] Run: docker network inspect addons-529420
	W0929 10:36:26.443146  809094 cli_runner.go:211] docker network inspect addons-529420 returned with exit code 1
	I0929 10:36:26.443184  809094 network_create.go:287] error running [docker network inspect addons-529420]: docker network inspect addons-529420: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-529420 not found
	I0929 10:36:26.443196  809094 network_create.go:289] output of [docker network inspect addons-529420]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-529420 not found
	
	** /stderr **
	I0929 10:36:26.443292  809094 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:36:26.462904  809094 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bea4c0}
	I0929 10:36:26.462945  809094 network_create.go:124] attempt to create docker network addons-529420 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 10:36:26.462999  809094 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-529420 addons-529420
	I0929 10:36:26.525434  809094 network_create.go:108] docker network addons-529420 192.168.49.0/24 created
	I0929 10:36:26.525484  809094 kic.go:121] calculated static IP "192.168.49.2" for the "addons-529420" container
	I0929 10:36:26.525595  809094 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 10:36:26.543028  809094 cli_runner.go:164] Run: docker volume create addons-529420 --label name.minikube.sigs.k8s.io=addons-529420 --label created_by.minikube.sigs.k8s.io=true
	I0929 10:36:26.562375  809094 oci.go:103] Successfully created a docker volume addons-529420
	I0929 10:36:26.562495  809094 cli_runner.go:164] Run: docker run --rm --name addons-529420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --entrypoint /usr/bin/test -v addons-529420:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 10:36:28.727983  809094 cli_runner.go:217] Completed: docker run --rm --name addons-529420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --entrypoint /usr/bin/test -v addons-529420:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (2.165422344s)
	I0929 10:36:28.728035  809094 oci.go:107] Successfully prepared a docker volume addons-529420
	I0929 10:36:28.728077  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:28.728109  809094 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 10:36:28.728393  809094 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-529420:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 10:36:32.463008  809094 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-529420:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.734545448s)
	I0929 10:36:32.463043  809094 kic.go:203] duration metric: took 3.734930316s to extract preloaded images to volume ...
	W0929 10:36:32.463138  809094 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 10:36:32.463171  809094 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 10:36:32.463213  809094 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 10:36:32.517633  809094 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-529420 --name addons-529420 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-529420 --network addons-529420 --ip 192.168.49.2 --volume addons-529420:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 10:36:32.782605  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Running}}
	I0929 10:36:32.802682  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:32.821845  809094 cli_runner.go:164] Run: docker exec addons-529420 stat /var/lib/dpkg/alternatives/iptables
	I0929 10:36:32.871577  809094 oci.go:144] the created container "addons-529420" has a running status.
	I0929 10:36:32.871612  809094 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa...
	I0929 10:36:33.119788  809094 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 10:36:33.151014  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:33.170919  809094 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 10:36:33.170942  809094 kic_runner.go:114] Args: [docker exec --privileged addons-529420 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 10:36:33.215532  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:33.236236  809094 machine.go:93] provisionDockerMachine start ...
	I0929 10:36:33.236361  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.258174  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.258466  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.258486  809094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 10:36:33.399100  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-529420
	
	I0929 10:36:33.399136  809094 ubuntu.go:182] provisioning hostname "addons-529420"
	I0929 10:36:33.399203  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.418829  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.419101  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.419124  809094 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-529420 && echo "addons-529420" | sudo tee /etc/hostname
	I0929 10:36:33.570695  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-529420
	
	I0929 10:36:33.570813  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.590267  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.590566  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.590590  809094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-529420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-529420/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-529420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:36:33.728831  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:36:33.728862  809094 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21656-804126/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-804126/.minikube}
	I0929 10:36:33.728905  809094 ubuntu.go:190] setting up certificates
	I0929 10:36:33.728918  809094 provision.go:84] configureAuth start
	I0929 10:36:33.728972  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:33.748261  809094 provision.go:143] copyHostCerts
	I0929 10:36:33.748352  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/ca.pem (1082 bytes)
	I0929 10:36:33.748507  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/cert.pem (1123 bytes)
	I0929 10:36:33.748577  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/key.pem (1679 bytes)
	I0929 10:36:33.748631  809094 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem org=jenkins.addons-529420 san=[127.0.0.1 192.168.49.2 addons-529420 localhost minikube]
	I0929 10:36:33.869211  809094 provision.go:177] copyRemoteCerts
	I0929 10:36:33.869279  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:36:33.869324  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.888791  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:33.988019  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:36:34.017500  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:36:34.045429  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 10:36:34.073342  809094 provision.go:87] duration metric: took 344.406928ms to configureAuth
	I0929 10:36:34.073378  809094 ubuntu.go:206] setting minikube options for container-runtime
	I0929 10:36:34.073606  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:34.073667  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.093622  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.093924  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.093946  809094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 10:36:34.233714  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 10:36:34.233744  809094 ubuntu.go:71] root file system type: overlay
	I0929 10:36:34.233863  809094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 10:36:34.233924  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.253864  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.254082  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.254160  809094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 10:36:34.407267  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 10:36:34.407373  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.426330  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.426664  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.426694  809094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 10:36:35.600389  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 10:36:34.405290761 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0929 10:36:35.600442  809094 machine.go:96] duration metric: took 2.364149235s to provisionDockerMachine
	I0929 10:36:35.600461  809094 client.go:171] duration metric: took 9.939636569s to LocalClient.Create
	I0929 10:36:35.600487  809094 start.go:167] duration metric: took 9.939718481s to libmachine.API.Create "addons-529420"
	I0929 10:36:35.600498  809094 start.go:293] postStartSetup for "addons-529420" (driver="docker")
	I0929 10:36:35.600507  809094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:36:35.600574  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:36:35.600613  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.620381  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.722159  809094 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:36:35.726032  809094 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 10:36:35.726071  809094 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 10:36:35.726079  809094 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 10:36:35.726086  809094 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 10:36:35.726099  809094 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-804126/.minikube/addons for local assets ...
	I0929 10:36:35.726162  809094 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-804126/.minikube/files for local assets ...
	I0929 10:36:35.726189  809094 start.go:296] duration metric: took 125.685891ms for postStartSetup
	I0929 10:36:35.726542  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:35.746842  809094 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json ...
	I0929 10:36:35.747139  809094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:36:35.747190  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.766493  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.861001  809094 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 10:36:35.865987  809094 start.go:128] duration metric: took 10.207507252s to createHost
	I0929 10:36:35.866014  809094 start.go:83] releasing machines lock for "addons-529420", held for 10.207665879s
	I0929 10:36:35.866077  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:35.884982  809094 ssh_runner.go:195] Run: cat /version.json
	I0929 10:36:35.885044  809094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:36:35.885050  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.885102  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.904087  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.905861  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:36.079122  809094 ssh_runner.go:195] Run: systemctl --version
	I0929 10:36:36.084194  809094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 10:36:36.089268  809094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 10:36:36.123533  809094 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 10:36:36.123613  809094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:36:36.154267  809094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 10:36:36.154304  809094 start.go:495] detecting cgroup driver to use...
	I0929 10:36:36.154338  809094 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:36:36.154522  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:36:36.173730  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 10:36:36.186991  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 10:36:36.198504  809094 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 10:36:36.198583  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 10:36:36.210007  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 10:36:36.221306  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 10:36:36.232714  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 10:36:36.244397  809094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:36:36.255307  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 10:36:36.266876  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 10:36:36.278201  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 10:36:36.289714  809094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:36:36.299414  809094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 10:36:36.309736  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:36.379690  809094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 10:36:36.458212  809094 start.go:495] detecting cgroup driver to use...
	I0929 10:36:36.458261  809094 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:36:36.458313  809094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 10:36:36.471829  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:36:36.484463  809094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:36:36.501567  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:36:36.514414  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 10:36:36.527777  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:36:36.546772  809094 ssh_runner.go:195] Run: which cri-dockerd
	I0929 10:36:36.550960  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 10:36:36.562724  809094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 10:36:36.582736  809094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 10:36:36.653862  809094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 10:36:36.724043  809094 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 10:36:36.724204  809094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 10:36:36.744794  809094 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 10:36:36.757465  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:36.826884  809094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 10:36:37.610007  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:36:37.622898  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 10:36:37.636301  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 10:36:37.649738  809094 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 10:36:37.722666  809094 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 10:36:37.796505  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:37.867566  809094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 10:36:37.892266  809094 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 10:36:37.904989  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:37.972029  809094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 10:36:38.051455  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 10:36:38.065182  809094 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 10:36:38.065259  809094 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 10:36:38.069600  809094 start.go:563] Will wait 60s for crictl version
	I0929 10:36:38.069675  809094 ssh_runner.go:195] Run: which crictl
	I0929 10:36:38.073569  809094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:36:38.110640  809094 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 10:36:38.110713  809094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 10:36:38.140525  809094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 10:36:38.168486  809094 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 10:36:38.168581  809094 cli_runner.go:164] Run: docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:36:38.186777  809094 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 10:36:38.191260  809094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:36:38.203969  809094 kubeadm.go:875] updating cluster {Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:36:38.204104  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:38.204155  809094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 10:36:38.227008  809094 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 10:36:38.227036  809094 docker.go:621] Images already preloaded, skipping extraction
	I0929 10:36:38.227106  809094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 10:36:38.249601  809094 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 10:36:38.249632  809094 cache_images.go:85] Images are preloaded, skipping loading
	I0929 10:36:38.249646  809094 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0929 10:36:38.249774  809094 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-529420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
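The blank ExecStart= followed by a second ExecStart= in the rendered unit above is the standard systemd drop-in idiom: for a regular service, ExecStart may only be redefined after it has been cleared, so the 10-kubeadm.conf drop-in written a few lines below blanks it and then supplies minikube's kubelet command line. Reproduced as a sketch, with the flag list abbreviated from the unit logged above:

    # Sketch of the drop-in idiom; flags abbreviated from the logged unit.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
    EOF
    sudo systemctl daemon-reload   # required before the override takes effect, as run below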
	I0929 10:36:38.249838  809094 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 10:36:38.304936  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:38.304974  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:38.305005  809094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:36:38.305052  809094 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-529420 NodeName:addons-529420 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:36:38.305228  809094 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-529420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 10:36:38.305304  809094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:36:38.315730  809094 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:36:38.315813  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:36:38.326318  809094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 10:36:38.346540  809094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:36:38.366500  809094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
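The 2213-byte kubeadm.yaml.new staged above is the four-document config rendered at 10:36:38.305 (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). When hand-editing a file like this, kubeadm can check it before anything runs on the node; a sketch:

    # Check a multi-document kubeadm config without touching the node.
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # Or render the fully-defaulted objects kubeadm would actually use:
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run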
	I0929 10:36:38.386461  809094 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 10:36:38.390498  809094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:36:38.403052  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:38.473834  809094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:36:38.499163  809094 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420 for IP: 192.168.49.2
	I0929 10:36:38.499185  809094 certs.go:194] generating shared ca certs ...
	I0929 10:36:38.499202  809094 certs.go:226] acquiring lock for ca certs: {Name:mkc1a6b9be22ca2c6acbf4ae075c8649cd7bb029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.499334  809094 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key
	I0929 10:36:38.646613  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt ...
	I0929 10:36:38.646650  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt: {Name:mk6af424fad818aae2ecb63097cb3cf4fa072fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.646838  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key ...
	I0929 10:36:38.646852  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key: {Name:mk82f55830669e727af6d6e5b82afa28223d781a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.646926  809094 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key
	I0929 10:36:38.721965  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt ...
	I0929 10:36:38.721996  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt: {Name:mkfb52e92ed776fc971f1dc2b44df4c3ddc05e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.722184  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key ...
	I0929 10:36:38.722197  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key: {Name:mk1e237f25fee0cc4ff615a96b95c3d6470c2cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.722265  809094 certs.go:256] generating profile certs ...
	I0929 10:36:38.722327  809094 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key
	I0929 10:36:38.722341  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt with IP's: []
	I0929 10:36:39.380758  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt ...
	I0929 10:36:39.380790  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: {Name:mkc77ef14ddf2986f76245d42c12726cef377eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.380976  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key ...
	I0929 10:36:39.380988  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key: {Name:mke6b630fea371c2903c210193a2d24b7bb1f06f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.381077  809094 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27
	I0929 10:36:39.381099  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 10:36:39.520731  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 ...
	I0929 10:36:39.520767  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27: {Name:mkcca52b9df73c7117f5aaacf4f623dd6c9d43c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.520941  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27 ...
	I0929 10:36:39.520956  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27: {Name:mk26b5e71bc5d928280847a07112275ef7de7abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.521026  809094 certs.go:381] copying /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 -> /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt
	I0929 10:36:39.521153  809094 certs.go:385] copying /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27 -> /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key
	I0929 10:36:39.521216  809094 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key
	I0929 10:36:39.521237  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt with IP's: []
	I0929 10:36:39.588037  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt ...
	I0929 10:36:39.588071  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt: {Name:mkf924850f5da9a86daf75692dc66662efd3fc9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.588249  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key ...
	I0929 10:36:39.588264  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key: {Name:mkecb516723c573c2dd57316eed6cd1d6263ba08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.588475  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:36:39.588513  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:36:39.588537  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:36:39.588557  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem (1679 bytes)
	I0929 10:36:39.589229  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:36:39.617084  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 10:36:39.644094  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:36:39.671391  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 10:36:39.698770  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:36:39.726395  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 10:36:39.753719  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:36:39.781288  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 10:36:39.808573  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:36:39.838955  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:36:39.859027  809094 ssh_runner.go:195] Run: openssl version
	I0929 10:36:39.865345  809094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:36:39.879241  809094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.883476  809094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.883550  809094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.891125  809094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
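The pair of commands at 10:36:39.883 and 10:36:39.891 implement OpenSSL's hashed-directory convention: x509 -hash prints the subject hash (b5213941 here), and the symlink publishes the CA under <hash>.0 in /etc/ssl/certs, where OpenSSL's default lookup can find it. The same trick, sketched generically:

    # Sketch: install a CA into the hashed trust directory and confirm the lookup works.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs "$CERT"   # a self-signed root verifies as OK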
	I0929 10:36:39.902246  809094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:36:39.906800  809094 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:36:39.906853  809094 kubeadm.go:392] StartCluster: {Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:36:39.907037  809094 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 10:36:39.927225  809094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:36:39.937579  809094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:36:39.948182  809094 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 10:36:39.948260  809094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:36:39.958474  809094 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:36:39.958499  809094 kubeadm.go:157] found existing configuration files:
	
	I0929 10:36:39.958547  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:36:39.968562  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:36:39.968615  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:36:39.978358  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:36:39.988499  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:36:39.988557  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:36:39.998306  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:36:40.008316  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:36:40.008394  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:36:40.018692  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:36:40.028624  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:36:40.028694  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:36:40.038485  809094 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 10:36:40.113436  809094 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 10:36:40.172082  809094 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:36:49.487259  809094 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:36:49.487349  809094 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:36:49.487474  809094 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 10:36:49.487549  809094 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 10:36:49.487599  809094 kubeadm.go:310] OS: Linux
	I0929 10:36:49.487674  809094 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 10:36:49.487738  809094 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 10:36:49.487808  809094 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 10:36:49.487877  809094 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 10:36:49.487945  809094 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 10:36:49.488009  809094 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 10:36:49.488084  809094 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 10:36:49.488155  809094 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 10:36:49.488236  809094 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:36:49.488334  809094 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:36:49.488489  809094 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:36:49.488588  809094 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:36:49.490162  809094 out.go:252]   - Generating certificates and keys ...
	I0929 10:36:49.490252  809094 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:36:49.490322  809094 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:36:49.490390  809094 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:36:49.490463  809094 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:36:49.490546  809094 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:36:49.490627  809094 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:36:49.490693  809094 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:36:49.490785  809094 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-529420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:36:49.490833  809094 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:36:49.490923  809094 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-529420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:36:49.490993  809094 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:36:49.491078  809094 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:36:49.491125  809094 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:36:49.491174  809094 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:36:49.491224  809094 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:36:49.491279  809094 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:36:49.491356  809094 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:36:49.491494  809094 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:36:49.491545  809094 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:36:49.491621  809094 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:36:49.491692  809094 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:36:49.493671  809094 out.go:252]   - Booting up control plane ...
	I0929 10:36:49.493764  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:36:49.493839  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:36:49.493913  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:36:49.494005  809094 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:36:49.494093  809094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:36:49.494209  809094 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:36:49.494311  809094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:36:49.494360  809094 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:36:49.494507  809094 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:36:49.494615  809094 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:36:49.494666  809094 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001039286s
	I0929 10:36:49.494753  809094 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:36:49.494843  809094 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 10:36:49.494970  809094 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:36:49.495066  809094 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:36:49.495136  809094 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.704142157s
	I0929 10:36:49.495206  809094 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.204406716s
	I0929 10:36:49.495290  809094 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.502382535s
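The control-plane checks logged above probe fixed local endpoints, which is also the quickest way to triage a control plane that kubeadm reports as unhealthy. Sketched as direct curl calls (run on the node; -k because the serving certificates are not in the host trust store):

    curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
    curl -sk https://192.168.49.2:8443/livez   # kube-apiserver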
	I0929 10:36:49.495410  809094 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:36:49.495564  809094 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:36:49.495618  809094 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:36:49.495799  809094 kubeadm.go:310] [mark-control-plane] Marking the node addons-529420 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:36:49.495870  809094 kubeadm.go:310] [bootstrap-token] Using token: yuv8vz.ch07v7kt1fq2e5wb
	I0929 10:36:49.498139  809094 out.go:252]   - Configuring RBAC rules ...
	I0929 10:36:49.498247  809094 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:36:49.498343  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:36:49.498483  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:36:49.498597  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:36:49.498726  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:36:49.498844  809094 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:36:49.498986  809094 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:36:49.499027  809094 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:36:49.499068  809094 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:36:49.499073  809094 kubeadm.go:310] 
	I0929 10:36:49.499124  809094 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:36:49.499129  809094 kubeadm.go:310] 
	I0929 10:36:49.499199  809094 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:36:49.499205  809094 kubeadm.go:310] 
	I0929 10:36:49.499230  809094 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:36:49.499295  809094 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:36:49.499340  809094 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:36:49.499345  809094 kubeadm.go:310] 
	I0929 10:36:49.499394  809094 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:36:49.499400  809094 kubeadm.go:310] 
	I0929 10:36:49.499496  809094 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:36:49.499503  809094 kubeadm.go:310] 
	I0929 10:36:49.499547  809094 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:36:49.499632  809094 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:36:49.499694  809094 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:36:49.499700  809094 kubeadm.go:310] 
	I0929 10:36:49.499768  809094 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:36:49.499834  809094 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:36:49.499840  809094 kubeadm.go:310] 
	I0929 10:36:49.499917  809094 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yuv8vz.ch07v7kt1fq2e5wb \
	I0929 10:36:49.500007  809094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52ed2ed4898a1f91b8e8d0663d1b9110ba480fd365338f5c4892399f9ccfb123 \
	I0929 10:36:49.500027  809094 kubeadm.go:310] 	--control-plane 
	I0929 10:36:49.500033  809094 kubeadm.go:310] 
	I0929 10:36:49.500105  809094 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:36:49.500110  809094 kubeadm.go:310] 
	I0929 10:36:49.500183  809094 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yuv8vz.ch07v7kt1fq2e5wb \
	I0929 10:36:49.500291  809094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52ed2ed4898a1f91b8e8d0663d1b9110ba480fd365338f5c4892399f9ccfb123 
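The join commands above embed a bootstrap token whose ttl is 24h0m0s per the InitConfiguration earlier, so they go stale. An equivalent command can be regenerated on the control plane at any time; a sketch (the CA-hash recipe is the standard kubeadm one, assuming an RSA CA key, and uses this cluster's non-default certs directory):

    # Mint a fresh token and print a ready-to-run worker join command.
    kubeadm token create --print-join-command
    # Recompute the --discovery-token-ca-cert-hash by hand if needed:
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'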
	I0929 10:36:49.500305  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:49.500322  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:49.501607  809094 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 10:36:49.502541  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 10:36:49.513883  809094 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
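The 496-byte conflist written above is what makes the "bridge" choice at 10:36:49.500 concrete. Its exact contents are not in the log; a minimal bridge-plus-portmap conflist using the pod CIDR from 10:36:38.305 might look roughly like this (every field here is illustrative, not the logged file):

    # Illustrative only; values are assumptions apart from the 10.244.0.0/16 pod CIDR.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF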
	I0929 10:36:49.534107  809094 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:36:49.534178  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:49.534217  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-529420 minikube.k8s.io/updated_at=2025_09_29T10_36_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8 minikube.k8s.io/name=addons-529420 minikube.k8s.io/primary=true
	I0929 10:36:49.634858  809094 ops.go:34] apiserver oom_adj: -16
	I0929 10:36:49.634879  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:50.135279  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:50.635199  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:51.135670  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:51.635820  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:52.135653  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:52.635900  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.135655  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.635560  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.702327  809094 kubeadm.go:1105] duration metric: took 4.168204298s to wait for elevateKubeSystemPrivileges
	I0929 10:36:53.702374  809094 kubeadm.go:394] duration metric: took 13.795525537s to StartCluster
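The burst of identical `kubectl get sa default` runs between 10:36:49.63 and 10:36:53.63 above is a readiness poll: the default ServiceAccount only appears once the controller-manager's service-account controller is running, so its existence gates the cluster-admin binding created at 10:36:49.534. The same wait, written out as an explicit loop:

    # Sketch: block until the default ServiceAccount exists (~500ms interval, as in the log).
    until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done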
	I0929 10:36:53.702401  809094 settings.go:142] acquiring lock: {Name:mkba08796c8ac17f1f94d6f1a6799aa6725155e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:53.702549  809094 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:36:53.703073  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/kubeconfig: {Name:mkae1c4acd5fb2ec6cdb06d7e9593ac36dd4ccbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:53.703279  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:36:53.703300  809094 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 10:36:53.703359  809094 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 10:36:53.703546  809094 addons.go:69] Setting ingress=true in profile "addons-529420"
	I0929 10:36:53.703557  809094 addons.go:69] Setting metrics-server=true in profile "addons-529420"
	I0929 10:36:53.703575  809094 addons.go:69] Setting volcano=true in profile "addons-529420"
	I0929 10:36:53.703577  809094 addons.go:238] Setting addon ingress=true in "addons-529420"
	I0929 10:36:53.703583  809094 addons.go:238] Setting addon metrics-server=true in "addons-529420"
	I0929 10:36:53.703600  809094 addons.go:238] Setting addon volcano=true in "addons-529420"
	I0929 10:36:53.703611  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:53.703623  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703615  809094 addons.go:69] Setting storage-provisioner=true in profile "addons-529420"
	I0929 10:36:53.703640  809094 addons.go:69] Setting inspektor-gadget=true in profile "addons-529420"
	I0929 10:36:53.703649  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703650  809094 addons.go:238] Setting addon inspektor-gadget=true in "addons-529420"
	I0929 10:36:53.703670  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703670  809094 addons.go:69] Setting default-storageclass=true in profile "addons-529420"
	I0929 10:36:53.703673  809094 addons.go:238] Setting addon storage-provisioner=true in "addons-529420"
	I0929 10:36:53.703685  809094 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-529420"
	I0929 10:36:53.703665  809094 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-529420"
	I0929 10:36:53.703742  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703752  809094 addons.go:69] Setting registry=true in profile "addons-529420"
	I0929 10:36:53.703784  809094 addons.go:238] Setting addon registry=true in "addons-529420"
	I0929 10:36:53.703797  809094 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-529420"
	I0929 10:36:53.703816  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703839  809094 addons.go:69] Setting registry-creds=true in profile "addons-529420"
	I0929 10:36:53.703853  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703864  809094 addons.go:238] Setting addon registry-creds=true in "addons-529420"
	I0929 10:36:53.703888  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.704091  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704229  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704236  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704241  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704269  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704280  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704369  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704383  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704822  809094 addons.go:69] Setting gcp-auth=true in profile "addons-529420"
	I0929 10:36:53.704853  809094 mustload.go:65] Loading cluster: addons-529420
	I0929 10:36:53.704946  809094 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-529420"
	I0929 10:36:53.704967  809094 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-529420"
	I0929 10:36:53.705010  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.705063  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:53.705314  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.705514  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.705877  809094 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-529420"
	I0929 10:36:53.705901  809094 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-529420"
	I0929 10:36:53.705927  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.706142  809094 out.go:179] * Verifying Kubernetes components...
	I0929 10:36:53.706320  809094 addons.go:69] Setting volumesnapshots=true in profile "addons-529420"
	I0929 10:36:53.706369  809094 addons.go:238] Setting addon volumesnapshots=true in "addons-529420"
	I0929 10:36:53.706411  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703632  809094 addons.go:69] Setting ingress-dns=true in profile "addons-529420"
	I0929 10:36:53.706720  809094 addons.go:238] Setting addon ingress-dns=true in "addons-529420"
	I0929 10:36:53.706754  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.707040  809094 addons.go:69] Setting cloud-spanner=true in profile "addons-529420"
	I0929 10:36:53.707069  809094 addons.go:238] Setting addon cloud-spanner=true in "addons-529420"
	I0929 10:36:53.707096  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703568  809094 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-529420"
	I0929 10:36:53.707458  809094 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-529420"
	I0929 10:36:53.707552  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:53.703557  809094 addons.go:69] Setting yakd=true in profile "addons-529420"
	I0929 10:36:53.707960  809094 addons.go:238] Setting addon yakd=true in "addons-529420"
	I0929 10:36:53.707999  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703623  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.714143  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.714652  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719013  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719122  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719970  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.722035  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.723786  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.778757  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.780491  809094 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:36:53.781884  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:36:53.781909  809094 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:36:53.782011  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.783509  809094 addons.go:238] Setting addon default-storageclass=true in "addons-529420"
	I0929 10:36:53.783669  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.786763  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.807336  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:36:53.807376  809094 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:36:53.807346  809094 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:36:53.807406  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:36:53.808907  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:36:53.809965  809094 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-529420"
	I0929 10:36:53.810002  809094 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:36:53.810022  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:36:53.810026  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.810093  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.809085  809094 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:36:53.809123  809094 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:36:53.810358  809094 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:36:53.810445  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.810589  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.811657  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:36:53.811713  809094 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:36:53.811728  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:36:53.811729  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:36:53.811785  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.812811  809094 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:36:53.812830  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:36:53.812883  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.813727  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:36:53.814898  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:36:53.816283  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:36:53.817528  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 10:36:53.818644  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:36:53.820451  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:36:53.820547  809094 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:36:53.820676  809094 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:36:53.821594  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:36:53.821612  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:36:53.821686  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.822262  809094 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:36:53.823314  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:36:53.822343  809094 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:36:53.823758  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:36:53.823888  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.824274  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.826866  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:36:53.828202  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:36:53.828229  809094 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:36:53.828293  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.831468  809094 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:36:53.831558  809094 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:36:53.831581  809094 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I0929 10:36:53.831884  809094 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:36:53.835051  809094 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:36:53.835084  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:36:53.835153  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.836300  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:36:53.836321  809094 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:36:53.836386  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.836883  809094 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I0929 10:36:53.837031  809094 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:36:53.838461  809094 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:36:53.838481  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:36:53.838563  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.839840  809094 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I0929 10:36:53.846729  809094 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 10:36:53.847497  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I0929 10:36:53.847543  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.847569  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.862719  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.868937  809094 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:36:53.870360  809094 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:36:53.870448  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:36:53.870559  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.876206  809094 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:36:53.876242  809094 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:36:53.879516  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.881669  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 10:36:53.897463  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.897986  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.902073  809094 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:36:53.907027  809094 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:36:53.908323  809094 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:36:53.908508  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:36:53.908601  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.912847  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.916620  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.916689  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.918495  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.924984  809094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:36:53.927315  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.928318  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.929791  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.931522  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.931855  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.932316  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	W0929 10:36:53.933912  809094 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 10:36:53.933975  809094 retry.go:31] will retry after 222.37842ms: ssh: handshake failed: EOF
	I0929 10:36:53.947593  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.951956  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:54.017495  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:36:54.017528  809094 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:36:54.037524  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:36:54.047797  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:36:54.047824  809094 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:36:54.088850  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:36:54.102982  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:36:54.104728  809094 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:36:54.104759  809094 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:36:54.105824  809094 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:54.105847  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:36:54.113997  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:36:54.115206  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:36:54.115595  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:36:54.115658  809094 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:36:54.115954  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:36:54.131289  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:36:54.131324  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:36:54.141402  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:36:54.141685  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:36:54.141711  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:36:54.146347  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:36:54.146375  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:36:54.148830  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 10:36:54.158233  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:36:54.187390  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:54.199350  809094 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:36:54.199378  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:36:54.219151  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:36:54.219235  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:36:54.228377  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:36:54.228502  809094 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:36:54.236642  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:36:54.236676  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:36:54.249985  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:36:54.250018  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:36:54.257015  809094 node_ready.go:35] waiting up to 6m0s for node "addons-529420" to be "Ready" ...
	I0929 10:36:54.257465  809094 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
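The injected host record comes from the sed pipeline run at 10:36:53.881669 above, which splices a hosts plugin block into the CoreDNS Corefile just ahead of its forward directive and then kubectl-replaces the ConfigMap. Reconstructed from that sed expression (indentation approximated), the added block, plus a quick way to inspect the result:

    # Corefile fragment inserted by the sed pipeline (reconstruction):
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    # Inspect the live ConfigMap after the replace:
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'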
	I0929 10:36:54.260181  809094 node_ready.go:49] node "addons-529420" is "Ready"
	I0929 10:36:54.260213  809094 node_ready.go:38] duration metric: took 3.156144ms for node "addons-529420" to be "Ready" ...
	I0929 10:36:54.260251  809094 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:36:54.260304  809094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:36:54.275549  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:36:54.288410  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:36:54.288527  809094 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:36:54.309547  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:36:54.326392  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:36:54.326462  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:36:54.371852  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:36:54.371884  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:36:54.385903  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:36:54.406122  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:36:54.441278  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:36:54.441321  809094 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:36:54.475876  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:36:54.475975  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:36:54.558603  809094 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:54.558634  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:36:54.567254  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:36:54.567363  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:36:54.637129  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:36:54.637160  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:36:54.681921  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:54.728318  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:36:54.728360  809094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:36:54.766660  809094 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-529420" context rescaled to 1 replicas
	I0929 10:36:54.848450  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:36:54.848663  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:36:54.946361  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:36:54.946523  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:36:55.079437  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:36:55.079471  809094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:36:55.190890  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:36:55.493984  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.390947237s)
	I0929 10:36:55.494004  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.379973258s)
	I0929 10:36:55.494048  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.378812825s)
	I0929 10:36:56.183759  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.042240387s)
	I0929 10:36:56.184086  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.068081762s)
	I0929 10:36:56.184109  809094 addons.go:479] Verifying addon ingress=true in "addons-529420"
	I0929 10:36:56.185647  809094 out.go:179] * Verifying ingress addon...
	I0929 10:36:56.187598  809094 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:36:56.207003  809094 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:36:56.207034  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
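The kapi.go:96 lines that follow are a poll loop: minikube re-lists the pods behind each label selector roughly twice a second until they leave Pending. A rough manual counterpart, assuming kubectl access to the same cluster (kubectl wait keys on the Ready condition rather than the pod phase, so it is only an approximation, and the timeout here is illustrative):

    # Approximate hand-run equivalent of the kapi.go wait loop:
    kubectl -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=6m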
	I0929 10:36:56.705051  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.193166  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.222996  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.064717663s)
	I0929 10:36:57.223366  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.035944416s)
	W0929 10:36:57.223400  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:36:57.223437  809094 retry.go:31] will retry after 152.765294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
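The failure is confined to ig-crd.yaml: every document in a manifest must carry top-level apiVersion and kind fields, and kubectl's client-side validation rejects this file before it ever reaches the API server, while the companion ig-deployment.yaml in the same invocation applied cleanly (all the "created" lines in stdout). The rejection can be reproduced without touching the cluster:

    # Client-side dry run reproduces the validation error shown above:
    kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml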
	I0929 10:36:57.223474  809094 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.963156736s)
	I0929 10:36:57.223491  809094 api_server.go:72] duration metric: took 3.520164967s to wait for apiserver process to appear ...
	I0929 10:36:57.223498  809094 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:36:57.223518  809094 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 10:36:57.223916  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.948332459s)
	I0929 10:36:57.223942  809094 addons.go:479] Verifying addon registry=true in "addons-529420"
	I0929 10:36:57.224224  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.914632573s)
	I0929 10:36:57.224522  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.838576934s)
	I0929 10:36:57.224572  809094 addons.go:479] Verifying addon metrics-server=true in "addons-529420"
	I0929 10:36:57.224580  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.818313312s)
	I0929 10:36:57.224688  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542731559s)
	W0929 10:36:57.224716  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:36:57.224732  809094 retry.go:31] will retry after 151.241621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
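Unlike the ig-crd.yaml case, this one is an ordering race rather than a malformed file: the same kubectl invocation both registers the VolumeSnapshot CRDs and creates a VolumeSnapshotClass, and the API server has not yet established the new types when the class is submitted, hence "no matches for kind". The forced retry at 10:36:57.376412 completes cleanly once discovery catches up. Sequencing it by hand would look roughly like this (file and CRD names taken from the output above):

    # Register the CRDs, wait for the API server to establish them, then
    # create the custom resource that depends on them:
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml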
	I0929 10:36:57.224940  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.034013059s)
	I0929 10:36:57.224964  809094 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-529420"
	I0929 10:36:57.225804  809094 out.go:179] * Verifying registry addon...
	I0929 10:36:57.225829  809094 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-529420 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:36:57.227287  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.078403021s)
	I0929 10:36:57.228280  809094 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:36:57.229531  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:36:57.230385  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 10:36:57.236158  809094 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
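The healthz probe behind api_server.go:253 is a plain HTTPS GET; the 200 status with the literal body "ok" above is all it checks for. The same probe by hand (the API server presents a self-signed cluster certificate, hence -k):

    # Manual counterpart of the healthz check logged above:
    curl -k https://192.168.49.2:8443/healthz
    # Expected: HTTP 200 with body "ok"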
	I0929 10:36:57.238077  809094 api_server.go:141] control plane version: v1.34.0
	I0929 10:36:57.238109  809094 api_server.go:131] duration metric: took 14.602902ms to wait for apiserver health ...
	I0929 10:36:57.238122  809094 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:36:57.295055  809094 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:36:57.295086  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:57.297613  809094 system_pods.go:59] 19 kube-system pods found
	I0929 10:36:57.297672  809094 system_pods.go:61] "amd-gpu-device-plugin-5pr2g" [d246c61e-5b06-4a8a-b834-a8104ed29092] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:36:57.297684  809094 system_pods.go:61] "coredns-66bc5c9577-5qvcx" [431a8d9d-251c-4916-9c89-87d909588d39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:36:57.297700  809094 system_pods.go:61] "csi-hostpath-attacher-0" [70a20e1e-6207-4b51-afa5-e013bc35656f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:36:57.297708  809094 system_pods.go:61] "csi-hostpath-resizer-0" [9de479e4-bad8-4f15-bfd7-f7f8a2dd12e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:36:57.297716  809094 system_pods.go:61] "csi-hostpathplugin-z7562" [bb4e1eda-850e-4da2-ae44-f84511d3ee67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:36:57.297725  809094 system_pods.go:61] "etcd-addons-529420" [1b81bfdd-919b-4cd8-a60e-22f79112660f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:36:57.297730  809094 system_pods.go:61] "kube-apiserver-addons-529420" [47505dbb-1863-4367-98f4-9f7ebb2c9415] Running
	I0929 10:36:57.297746  809094 system_pods.go:61] "kube-controller-manager-addons-529420" [37496a5f-d392-4a23-b1a5-1c7a43d653a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:36:57.297755  809094 system_pods.go:61] "kube-ingress-dns-minikube" [a3dd66da-3ebf-43f5-80b9-e99fc50dfcb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:36:57.297760  809094 system_pods.go:61] "kube-proxy-kxjqx" [99197e14-7d79-403b-87f4-b1f17f52f9f5] Running
	I0929 10:36:57.297767  809094 system_pods.go:61] "kube-scheduler-addons-529420" [ebe74bc8-ca56-4474-99cf-5b6633e8a102] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:36:57.297774  809094 system_pods.go:61] "metrics-server-85b7d694d7-6kvbb" [2261b64a-3e18-4fca-8fb4-4a870b5d302f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:36:57.297781  809094 system_pods.go:61] "nvidia-device-plugin-daemonset-rv5nz" [70340435-f458-4a2a-a617-185f04893887] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:36:57.297791  809094 system_pods.go:61] "registry-66898fdd98-9545s" [6b827e39-0a4c-4539-b408-9bfd1cdda6c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:36:57.297799  809094 system_pods.go:61] "registry-creds-764b6fb674-ssj2n" [a5aecf48-b091-49c9-8743-a9d358c2fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:36:57.297807  809094 system_pods.go:61] "registry-proxy-b2bpz" [87ba9cf0-8f09-4423-b674-f77637763bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:36:57.297814  809094 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bxfh2" [bb19aa00-633c-4f38-88f2-3cd29546aec9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.297826  809094 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pwx9p" [5203fa8d-6a4b-4bc9-89be-9a58d780e11e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.297834  809094 system_pods.go:61] "storage-provisioner" [c791ec78-6e64-4b97-a928-e3f1e19e92c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:36:57.297843  809094 system_pods.go:74] duration metric: took 59.712387ms to wait for pod list to return data ...
	I0929 10:36:57.297854  809094 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:36:57.298160  809094 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:36:57.298172  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:57.305651  809094 default_sa.go:45] found service account: "default"
	I0929 10:36:57.305752  809094 default_sa.go:55] duration metric: took 7.889394ms for default service account to be created ...
	I0929 10:36:57.305777  809094 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:36:57.314079  809094 system_pods.go:86] 19 kube-system pods found
	I0929 10:36:57.314192  809094 system_pods.go:89] "amd-gpu-device-plugin-5pr2g" [d246c61e-5b06-4a8a-b834-a8104ed29092] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:36:57.314243  809094 system_pods.go:89] "coredns-66bc5c9577-5qvcx" [431a8d9d-251c-4916-9c89-87d909588d39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:36:57.314277  809094 system_pods.go:89] "csi-hostpath-attacher-0" [70a20e1e-6207-4b51-afa5-e013bc35656f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:36:57.314298  809094 system_pods.go:89] "csi-hostpath-resizer-0" [9de479e4-bad8-4f15-bfd7-f7f8a2dd12e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:36:57.314318  809094 system_pods.go:89] "csi-hostpathplugin-z7562" [bb4e1eda-850e-4da2-ae44-f84511d3ee67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:36:57.314350  809094 system_pods.go:89] "etcd-addons-529420" [1b81bfdd-919b-4cd8-a60e-22f79112660f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:36:57.314386  809094 system_pods.go:89] "kube-apiserver-addons-529420" [47505dbb-1863-4367-98f4-9f7ebb2c9415] Running
	I0929 10:36:57.314408  809094 system_pods.go:89] "kube-controller-manager-addons-529420" [37496a5f-d392-4a23-b1a5-1c7a43d653a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:36:57.314453  809094 system_pods.go:89] "kube-ingress-dns-minikube" [a3dd66da-3ebf-43f5-80b9-e99fc50dfcb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:36:57.314472  809094 system_pods.go:89] "kube-proxy-kxjqx" [99197e14-7d79-403b-87f4-b1f17f52f9f5] Running
	I0929 10:36:57.314489  809094 system_pods.go:89] "kube-scheduler-addons-529420" [ebe74bc8-ca56-4474-99cf-5b6633e8a102] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:36:57.314521  809094 system_pods.go:89] "metrics-server-85b7d694d7-6kvbb" [2261b64a-3e18-4fca-8fb4-4a870b5d302f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:36:57.314541  809094 system_pods.go:89] "nvidia-device-plugin-daemonset-rv5nz" [70340435-f458-4a2a-a617-185f04893887] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:36:57.314558  809094 system_pods.go:89] "registry-66898fdd98-9545s" [6b827e39-0a4c-4539-b408-9bfd1cdda6c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:36:57.314575  809094 system_pods.go:89] "registry-creds-764b6fb674-ssj2n" [a5aecf48-b091-49c9-8743-a9d358c2fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:36:57.314623  809094 system_pods.go:89] "registry-proxy-b2bpz" [87ba9cf0-8f09-4423-b674-f77637763bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:36:57.314641  809094 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxfh2" [bb19aa00-633c-4f38-88f2-3cd29546aec9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.314661  809094 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pwx9p" [5203fa8d-6a4b-4bc9-89be-9a58d780e11e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.314703  809094 system_pods.go:89] "storage-provisioner" [c791ec78-6e64-4b97-a928-e3f1e19e92c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:36:57.314741  809094 system_pods.go:126] duration metric: took 8.926144ms to wait for k8s-apps to be running ...
	I0929 10:36:57.314762  809094 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:36:57.314849  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:36:57.336098  809094 system_svc.go:56] duration metric: took 21.32375ms WaitForService to wait for kubelet
	I0929 10:36:57.336133  809094 kubeadm.go:578] duration metric: took 3.632804576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:36:57.336158  809094 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:36:57.344338  809094 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 10:36:57.344379  809094 node_conditions.go:123] node cpu capacity is 8
	I0929 10:36:57.344396  809094 node_conditions.go:105] duration metric: took 8.231792ms to run NodePressure ...
	I0929 10:36:57.344410  809094 start.go:241] waiting for startup goroutines ...
	I0929 10:36:57.376412  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:57.376539  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:57.692404  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.793184  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:57.793318  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.191147  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:58.233460  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.233602  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:58.692356  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:58.733694  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.733886  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:59.194666  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:59.293196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:59.293196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:59.692198  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:59.792340  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:59.792540  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:00.126856  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.750359151s)
	I0929 10:37:00.126994  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.750412046s)
	W0929 10:37:00.127048  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:00.127077  809094 retry.go:31] will retry after 243.584374ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
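Escalating to apply --force does not help here, because the file is rejected by client-side validation before any request is sent, so every retry reproduces the same error verbatim. The stderr itself names the only way to push this file through unchanged, at the cost of skipping client-side schema checks entirely:

    # Workaround suggested by the error message (skips client-side
    # validation; use with care):
    kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml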
	I0929 10:37:00.191955  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:00.292723  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:00.292816  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:00.371587  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:00.692541  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:00.733843  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:00.734315  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:01.147379  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:01.147434  809094 retry.go:31] will retry after 313.436894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:01.192318  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:01.194800  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:37:01.194881  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:37:01.217609  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:37:01.269785  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:01.269897  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:01.337246  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 10:37:01.363074  809094 addons.go:238] Setting addon gcp-auth=true in "addons-529420"
	I0929 10:37:01.363143  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:37:01.363713  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:37:01.386898  809094 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:37:01.386956  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:37:01.409699  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:37:01.461472  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:01.691916  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:01.733454  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:01.734086  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.191808  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:37:02.208498  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:02.208536  809094 retry.go:31] will retry after 1.150273014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:02.211872  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:37:02.213574  809094 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:37:02.214876  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:37:02.214902  809094 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:37:02.234020  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:02.234162  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.241298  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:37:02.241331  809094 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:37:02.266924  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:37:02.266946  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 10:37:02.293874  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:37:02.684619  809094 addons.go:479] Verifying addon gcp-auth=true in "addons-529420"
	I0929 10:37:02.685986  809094 out.go:179] * Verifying gcp-auth addon...
	I0929 10:37:02.688185  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:37:02.693394  809094 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:37:02.693428  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:02.693405  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:02.733763  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.733885  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
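The kapi.go:96 lines are minikube polling each addon's pod label selector until the pod leaves Pending. A rough hand-run equivalent of one of those waits (a sketch; the timeout value is illustrative, not minikube's):

    kubectl --context addons-529420 -n gcp-auth wait pod \
      -l kubernetes.io/minikube-addons=gcp-auth \
      --for=condition=Ready --timeout=120s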
	I0929 10:37:03.192364  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:03.192855  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:03.234316  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.234465  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:03.359547  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:03.692021  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:03.692117  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:03.733209  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.734038  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:04.191279  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:04.191358  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:04.233692  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:04.234492  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:04.287957  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:04.287997  809094 retry.go:31] will retry after 1.511178059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:04.692167  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:04.692234  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:04.733131  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:04.734062  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.191715  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:05.191921  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:05.233860  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:05.234129  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.691361  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:05.691521  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:05.734135  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:05.734258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.800129  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:06.191447  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:06.191552  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:06.233874  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:06.234076  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:06.411195  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:06.411250  809094 retry.go:31] will retry after 2.100064857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:06.691798  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:06.692073  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:06.732827  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:06.733809  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.191508  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:07.191709  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:07.292363  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:07.292457  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.691546  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:07.691574  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:07.734030  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.734125  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.191781  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:08.191814  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:08.232967  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.233754  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:08.512112  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:08.691009  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:08.691510  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:08.733861  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.733938  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:09.104861  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:09.104897  809094 retry.go:31] will retry after 3.376157006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:09.191367  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:09.191595  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:09.233494  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:09.233556  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:09.691727  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:09.691798  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:09.733640  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:09.733721  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:10.190922  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:10.191004  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:10.233082  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:10.234008  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:10.691979  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:10.692058  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:10.733220  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:10.733906  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:11.191409  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:11.191511  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:11.234068  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:11.234155  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:11.691573  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:11.691635  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:11.732866  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:11.733594  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:12.191979  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:12.192191  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:12.232888  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:12.233877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:12.481250  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:12.691033  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:12.691374  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:12.733300  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:12.733527  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:13.054889  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:13.054935  809094 retry.go:31] will retry after 5.523481945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
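The retry delays logged by retry.go:31 (1.15s, 1.51s, 2.10s, 3.38s, 5.52s, ...) grow roughly exponentially with jitter. A minimal bash sketch of the same apply-and-back-off pattern (delays illustrative; minikube's actual backoff lives in its retry package):

    delay=1
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.0/kubectl apply --force \
        -f /etc/kubernetes/addons/ig-crd.yaml \
        -f /etc/kubernetes/addons/ig-deployment.yaml; do
      sleep "$delay"          # back off before the next attempt
      delay=$((delay * 2))    # roughly double each time
    done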
	I0929 10:37:13.191285  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:13.191341  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:13.233717  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:13.233802  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:13.691877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:13.691923  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:13.732931  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:13.733745  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:14.191616  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:14.191752  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:14.233768  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:14.233879  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:14.691856  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:14.691909  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:14.733917  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:14.733984  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:15.191840  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:15.191956  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:15.232857  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:15.233963  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:15.691580  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:15.691731  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:15.733913  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:15.735663  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:16.191071  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:16.191303  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:16.233820  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:16.233918  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:16.691314  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:16.691386  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:16.734056  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:16.734170  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:17.191402  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:17.191593  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:17.233380  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:17.233625  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:17.691493  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:17.691699  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:17.732648  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:17.733372  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:18.191911  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:18.192046  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:18.233634  809094 kapi.go:107] duration metric: took 21.004035787s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 10:37:18.234214  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:18.578952  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:18.692732  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:18.693005  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:18.734571  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:19.192457  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:19.192698  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:19.236192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:19.359657  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:19.359698  809094 retry.go:31] will retry after 5.374303415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:19.691689  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:19.691870  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:19.734388  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:20.192362  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:20.192436  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:20.235035  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:20.691493  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:20.691559  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:20.734068  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:21.191686  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:21.191754  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:21.234043  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:21.691673  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:21.691816  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:21.734360  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:22.192209  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:22.192364  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:22.234765  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:22.691170  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:22.691296  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:22.734585  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:23.191838  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:23.191855  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:23.234192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:23.691617  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:23.691827  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:23.734342  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:24.191851  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:24.191914  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:24.234052  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:24.691665  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:24.691766  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:24.734744  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:24.778008  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:25.191224  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:25.191245  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:25.233932  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:25.323222  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:25.323263  809094 retry.go:31] will retry after 4.946418269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:25.692211  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:25.692261  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:25.793190  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:26.191796  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:26.192044  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:26.234744  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:26.691669  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:26.691730  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:26.792265  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:27.191948  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:27.192054  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:27.293582  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:27.691274  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:27.691324  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:27.734583  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:28.191896  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:28.192035  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:28.234579  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:28.691393  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:28.691468  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:28.745258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:29.191857  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:29.192145  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:29.293661  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:29.692369  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:29.692409  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:29.734847  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:30.191284  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:30.191549  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:30.235247  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:30.270015  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:30.692450  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:30.692468  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:30.734382  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:30.874687  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:30.874717  809094 retry.go:31] will retry after 9.611404012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:31.191034  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:31.191136  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:31.234596  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:31.691789  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:31.691851  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:31.734487  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:32.192228  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:32.192397  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:32.234895  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:32.691647  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:32.691826  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:32.734231  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:33.191791  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:33.191803  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:33.293681  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:33.691499  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:33.691575  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:33.733652  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:34.191347  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:34.191394  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:34.235018  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:34.692239  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:34.692288  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:34.734356  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:35.191945  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:35.192042  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:35.233802  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:35.691543  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:35.691708  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:35.733748  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:36.190728  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:36.190927  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:36.234247  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:36.691843  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:36.691958  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:36.734324  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:37.232504  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:37.232580  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:37.234832  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:37.691206  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:37.691326  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:37.734619  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:38.191595  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:38.191680  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:38.292586  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:38.692258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:38.692407  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:38.734713  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:39.191465  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:39.191541  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:39.233459  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:39.692264  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:39.692361  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:39.736727  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:40.191041  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:40.191241  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:40.282297  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:40.487332  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:40.700928  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:40.700982  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:40.734342  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:41.111058  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:41.111096  809094 retry.go:31] will retry after 27.798318096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
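Attempt after attempt fails identically; no amount of backoff will fix a manifest that is structurally invalid. The error text itself names the only in-place workaround, which skips client-side validation and therefore masks rather than repairs the broken document:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml \
      -f /etc/kubernetes/addons/ig-deployment.yaml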
	I0929 10:37:41.192192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:41.192448  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:41.234734  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:41.691467  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:41.691559  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:41.738052  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:42.191281  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:42.191334  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:42.234363  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:42.692206  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:42.692364  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:42.735143  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:43.192059  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:43.192131  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:43.234581  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:43.691539  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:43.691592  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:43.734168  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:44.192399  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:44.192554  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:44.235151  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:44.712201  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:44.712256  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:44.814275  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:45.191818  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:45.193367  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:45.292640  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:45.692318  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:45.692523  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:45.734670  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:46.191229  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:46.191310  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:46.234266  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:46.691380  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:46.691528  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:46.735585  809094 kapi.go:107] duration metric: took 49.505194902s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 10:37:47.191025  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:47.191255  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:47.691635  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:47.691700  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:48.191562  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:48.191703  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:48.692227  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:48.692295  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.191449  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:49.191458  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.691391  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.691392  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.191310  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.191376  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:50.691312  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.691530  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:51.190897  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:51.191071  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:51.690962  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:51.690975  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:52.191596  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:52.191799  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:52.691821  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:52.691860  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.192013  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:53.192203  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.690815  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.691412  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:54.191531  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:54.191563  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:54.690735  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:54.691192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.191955  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.192190  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:55.691501  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.691567  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:56.191135  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:56.191259  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:56.691523  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:56.691569  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.190892  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.191196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:57.691389  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.691460  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.191486  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.191637  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:58.691468  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.691615  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:59.191501  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:59.191524  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:59.691453  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:59.691624  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:00.191269  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:00.191495  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:00.691597  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:00.691645  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:01.192840  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:01.193251  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:01.692293  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:01.692359  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:02.191486  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:02.191506  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:02.691685  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:02.691815  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:03.191877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:03.191952  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:03.692663  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:03.692768  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:04.191650  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:04.191736  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:04.691141  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:04.691354  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:05.191572  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:05.191608  809094 kapi.go:107] duration metric: took 1m9.004009852s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:38:05.692502  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:06.192606  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:06.692131  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:07.191846  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:07.691923  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.192050  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.691556  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.910309  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:38:09.192274  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:38:09.505831  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:38:09.505869  809094 retry.go:31] will retry after 26.574045109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
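
The retry.go line above schedules a re-run with a randomized, growing delay rather than failing the addon immediately. A minimal sketch of that pattern using only the standard library (illustrative, assuming a simple jittered backoff; not minikube's actual retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil re-runs fn with a randomized, growing delay until it succeeds
    // or the overall deadline elapses. Illustrative only.
    func retryUntil(deadline time.Duration, fn func() error) error {
    	start := time.Now()
    	base := 5 * time.Second
    	for attempt := 1; ; attempt++ {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("giving up after %v: %w", time.Since(start), err)
    		}
    		// Randomized window so concurrent callers do not retry in lockstep;
    		// "will retry after 26.574045109s" above is one draw from such a window.
    		sleep := base + time.Duration(rand.Int63n(int64(base)*int64(attempt)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    	}
    }

    func main() {
    	_ = retryUntil(20*time.Second, func() error {
    		return errors.New("kubectl apply failed") // stand-in for the real apply
    	})
    }
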
	I0929 10:38:09.692484  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:10.191984  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:10.692204  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:11.191688  809094 kapi.go:107] duration metric: took 1m8.503599106s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:38:11.193489  809094 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-529420 cluster.
	I0929 10:38:11.194659  809094 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:38:11.195892  809094 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
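
Each kapi.go:96 line above is one tick of a label-selector poll: the pods matching the label are checked roughly every 500ms until they leave Pending, and kapi.go:107 reports the total elapsed time. A minimal sketch of that polling shape, with a stand-in condition function in place of the real client-go pod lookup (illustrative only):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForLabel polls cond every interval until it returns true or timeout
    // elapses, mirroring the kapi.go:96/107 pattern in the log above.
    func waitForLabel(label string, interval, timeout time.Duration, cond func() bool) error {
    	start := time.Now()
    	for {
    		if cond() {
    			fmt.Printf("duration metric: took %v to wait for %s\n", time.Since(start), label)
    			return nil
    		}
    		if time.Since(start) > timeout {
    			return errors.New("context deadline exceeded")
    		}
    		fmt.Printf("waiting for pod %q, current state: Pending\n", label)
    		time.Sleep(interval)
    	}
    }

    func main() {
    	ready := time.Now().Add(2 * time.Second)
    	_ = waitForLabel("kubernetes.io/minikube-addons=gcp-auth",
    		500*time.Millisecond, 6*time.Minute,
    		func() bool { return time.Now().After(ready) }) // stand-in for a pod lookup
    }
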
	I0929 10:38:36.081555  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 10:38:36.665106  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 10:38:36.665238  809094 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
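
The apply ultimately fails because kubectl's client-side validation requires every YAML document to set apiVersion and kind: the deployment manifest applies cleanly while ig-crd.yaml is rejected with "apiVersion not set, kind not set". A minimal sketch of that TypeMeta check, assuming gopkg.in/yaml.v3, a single-document file, and a hypothetical local copy of the manifest (illustrative, not kubectl's implementation):

    package main

    import (
    	"fmt"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	raw, err := os.ReadFile("ig-crd.yaml") // hypothetical local copy of the manifest
    	if err != nil {
    		panic(err)
    	}
    	var doc map[string]interface{}
    	if err := yaml.Unmarshal(raw, &doc); err != nil {
    		panic(err)
    	}
    	var missing []string
    	if v, _ := doc["apiVersion"].(string); v == "" {
    		missing = append(missing, "apiVersion not set")
    	}
    	if k, _ := doc["kind"].(string); k == "" {
    		missing = append(missing, "kind not set")
    	}
    	if len(missing) > 0 {
    		// Same shape as the kubectl error quoted above.
    		fmt.Printf("error validating %q: %v\n", "ig-crd.yaml", missing)
    		os.Exit(1)
    	}
    	fmt.Println("TypeMeta present")
    }
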
	I0929 10:38:36.667274  809094 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, registry-creds, metrics-server, ingress-dns, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0929 10:38:36.668228  809094 addons.go:514] duration metric: took 1m42.964873486s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass registry-creds metrics-server ingress-dns volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0929 10:38:36.668278  809094 start.go:246] waiting for cluster config update ...
	I0929 10:38:36.668307  809094 start.go:255] writing updated cluster config ...
	I0929 10:38:36.668641  809094 ssh_runner.go:195] Run: rm -f paused
	I0929 10:38:36.672803  809094 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:38:36.676997  809094 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5qvcx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.681719  809094 pod_ready.go:94] pod "coredns-66bc5c9577-5qvcx" is "Ready"
	I0929 10:38:36.681743  809094 pod_ready.go:86] duration metric: took 4.720124ms for pod "coredns-66bc5c9577-5qvcx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.684044  809094 pod_ready.go:83] waiting for pod "etcd-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.688395  809094 pod_ready.go:94] pod "etcd-addons-529420" is "Ready"
	I0929 10:38:36.688443  809094 pod_ready.go:86] duration metric: took 4.375753ms for pod "etcd-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.690812  809094 pod_ready.go:83] waiting for pod "kube-apiserver-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.694897  809094 pod_ready.go:94] pod "kube-apiserver-addons-529420" is "Ready"
	I0929 10:38:36.694938  809094 pod_ready.go:86] duration metric: took 4.098109ms for pod "kube-apiserver-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.696911  809094 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.077390  809094 pod_ready.go:94] pod "kube-controller-manager-addons-529420" is "Ready"
	I0929 10:38:37.077434  809094 pod_ready.go:86] duration metric: took 380.496118ms for pod "kube-controller-manager-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.276699  809094 pod_ready.go:83] waiting for pod "kube-proxy-kxjqx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.677151  809094 pod_ready.go:94] pod "kube-proxy-kxjqx" is "Ready"
	I0929 10:38:37.677180  809094 pod_ready.go:86] duration metric: took 400.453557ms for pod "kube-proxy-kxjqx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.877980  809094 pod_ready.go:83] waiting for pod "kube-scheduler-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:38.277511  809094 pod_ready.go:94] pod "kube-scheduler-addons-529420" is "Ready"
	I0929 10:38:38.277542  809094 pod_ready.go:86] duration metric: took 399.519909ms for pod "kube-scheduler-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:38.277554  809094 pod_ready.go:40] duration metric: took 1.604719848s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
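
The pod_ready lines above reduce to a single condition check: a pod counts as "Ready" when its PodReady condition reports True. A minimal sketch of that check against the k8s.io/api core/v1 types (an assumed reconstruction, not minikube's pod_ready.go):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isReady reports whether the pod's PodReady condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
    		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
    	}}}
    	fmt.Println(isReady(pod)) // true
    }
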
	I0929 10:38:38.326458  809094 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 10:38:38.327948  809094 out.go:179] * Done! kubectl is now configured to use "addons-529420" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 29 10:46:16 addons-529420 dockerd[1117]: time="2025-09-29T10:46:16.420811680Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:46:18 addons-529420 dockerd[1117]: time="2025-09-29T10:46:18.699889263Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:46:59 addons-529420 dockerd[1117]: time="2025-09-29T10:46:59.700640420Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:47:04 addons-529420 dockerd[1117]: time="2025-09-29T10:47:04.693768710Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:47:05 addons-529420 dockerd[1117]: time="2025-09-29T10:47:05.952262466Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:47:06 addons-529420 dockerd[1117]: time="2025-09-29T10:47:06.425910889Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:47:24 addons-529420 dockerd[1117]: time="2025-09-29T10:47:24.619747458Z" level=info msg="ignoring event" container=052a287bdf089279ed4d96d9e74eb833343c91cb2fa0c1077c95c76e40b2dde4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:47:30 addons-529420 dockerd[1117]: time="2025-09-29T10:47:30.947631683Z" level=info msg="ignoring event" container=bb3744a5c9b15698eedec6834da8902ece3e2b85e6b68c9a6ae4ec8c1365229e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:47:46 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:47:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/620108435d4e5761a7acb4f1738f16f2bcbfd8e87440bfa8f34fab96ea08a68e/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 29 10:47:46 addons-529420 dockerd[1117]: time="2025-09-29T10:47:46.602200352Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:47:47 addons-529420 dockerd[1117]: time="2025-09-29T10:47:47.358254358Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:47:47 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:47:47Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Sep 29 10:48:02 addons-529420 dockerd[1117]: time="2025-09-29T10:48:02.952724746Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:48:03 addons-529420 dockerd[1117]: time="2025-09-29T10:48:03.429397762Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:48:26 addons-529420 dockerd[1117]: time="2025-09-29T10:48:26.953326476Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:48:27 addons-529420 dockerd[1117]: time="2025-09-29T10:48:27.432384059Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:48:33 addons-529420 dockerd[1117]: time="2025-09-29T10:48:33.698522133Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:48:36 addons-529420 dockerd[1117]: time="2025-09-29T10:48:36.701764564Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:49:07 addons-529420 dockerd[1117]: time="2025-09-29T10:49:07.954086181Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:49:08 addons-529420 dockerd[1117]: time="2025-09-29T10:49:08.436381569Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:49:46 addons-529420 dockerd[1117]: time="2025-09-29T10:49:46.444457226Z" level=info msg="ignoring event" container=620108435d4e5761a7acb4f1738f16f2bcbfd8e87440bfa8f34fab96ea08a68e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:50:16 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:50:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3ace35db5b46186d22767674f314b728476988ef5b2a7e9cc6363dce26a49047/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 29 10:50:17 addons-529420 dockerd[1117]: time="2025-09-29T10:50:17.071522409Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:50:17 addons-529420 dockerd[1117]: time="2025-09-29T10:50:17.832844388Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:50:17 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:50:17Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
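
Every failed busybox pull above is hitting Docker Hub's anonymous pull quota ("toomanyrequests"). Docker documents a way to inspect that quota without consuming a pull: fetch an anonymous token for the ratelimitpreview/test repository, then read the ratelimit-* headers from a manifest HEAD request. A minimal sketch of that documented check (endpoints as published by Docker; authenticating the node's Docker daemon or using a registry mirror is the usual remedy):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// 1. Fetch an anonymous token scoped to Docker's rate-limit test image.
    	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	var tok struct {
    		Token string `json:"token"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
    		panic(err)
    	}

    	// 2. HEAD the manifest; Docker Hub reports the quota in response headers.
    	req, _ := http.NewRequest("HEAD", "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
    	req.Header.Set("Authorization", "Bearer "+tok.Token)
    	res, err := http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	defer res.Body.Close()
    	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
    	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
    }
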
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c663ddd040c70       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   75cd2cc2ccb54       busybox
	53f450233c951       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             12 minutes ago      Running             controller                               0                   7e7e5a36e33c6       ingress-nginx-controller-9cc49f96f-pxng2
	b8bc330ded02a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          12 minutes ago      Running             csi-snapshotter                          0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	097ec25364497       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          12 minutes ago      Running             csi-provisioner                          0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	42c80806f3442       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            12 minutes ago      Running             liveness-probe                           0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	deb6678813fad       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           12 minutes ago      Running             hostpath                                 0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	4c9a460366a7b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                12 minutes ago      Running             node-driver-registrar                    0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	8ea1f8a967cfc       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   12 minutes ago      Running             csi-external-health-monitor-controller   0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	8c797f52e7973       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              12 minutes ago      Running             csi-resizer                              0                   f5028bed08a36       csi-hostpath-resizer-0
	7ed61e7756a57       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             12 minutes ago      Running             csi-attacher                             0                   80edbd2ea7f0f       csi-hostpath-attacher-0
	ac3df2986f815       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   2f35c5222a676       snapshot-controller-7d9fbc56b8-pwx9p
	cb2fd8f092644       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   ba080eb06cf32       snapshot-controller-7d9fbc56b8-bxfh2
	2575c5650e053       8c217da6734db                                                                                                                                12 minutes ago      Exited              patch                                    1                   8b7ddb7c14d4d       ingress-nginx-admission-patch-ncp6w
	788b28f41ae4a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   12 minutes ago      Exited              create                                   0                   fe7563f8d917a       ingress-nginx-admission-create-tj9td
	866b95aa367a6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            12 minutes ago      Running             gadget                                   0                   7ee25e2189efc       gadget-8bvtc
	7292e19e493f5       6e38f40d628db                                                                                                                                13 minutes ago      Running             storage-provisioner                      1                   1360822b0ef5c       storage-provisioner
	8cc199db322c9       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       13 minutes ago      Running             local-path-provisioner                   0                   aae77681f23ba       local-path-provisioner-648f6765c9-fg6hp
	88b9438083846       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         13 minutes ago      Running             minikube-ingress-dns                     0                   33ea5a9962762       kube-ingress-dns-minikube
	910e07bc5455e       6e38f40d628db                                                                                                                                13 minutes ago      Exited              storage-provisioner                      0                   1360822b0ef5c       storage-provisioner
	c491d217e6a98       52546a367cc9e                                                                                                                                13 minutes ago      Running             coredns                                  0                   5f95340f57eb0       coredns-66bc5c9577-5qvcx
	7f229dd48a2c3       df0860106674d                                                                                                                                13 minutes ago      Running             kube-proxy                               0                   6014edb3b0f91       kube-proxy-kxjqx
	cc27cd95c2f3e       90550c43ad2bc                                                                                                                                13 minutes ago      Running             kube-apiserver                           0                   2e57097224cc0       kube-apiserver-addons-529420
	ea3b2c765f2ae       a0af72f2ec6d6                                                                                                                                13 minutes ago      Running             kube-controller-manager                  0                   ce79a900c656a       kube-controller-manager-addons-529420
	7f38046322a2d       46169d968e920                                                                                                                                13 minutes ago      Running             kube-scheduler                           0                   74b0641932447       kube-scheduler-addons-529420
	3a3152518f29b       5f1f5298c888d                                                                                                                                13 minutes ago      Running             etcd                                     0                   099566320ee0a       etcd-addons-529420
	
	
	==> controller_ingress [53f450233c95] <==
	I0929 10:38:04.686885       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"7d8260c8-2dc2-42de-8475-3174f0fbcebe", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0929 10:38:05.882258       7 nginx.go:319] "Starting NGINX process"
	I0929 10:38:05.882452       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0929 10:38:05.882700       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0929 10:38:05.883222       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 10:38:05.888729       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0929 10:38:05.888761       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-pxng2"
	I0929 10:38:05.892640       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-pxng2" node="addons-529420"
	I0929 10:38:05.926797       7 controller.go:228] "Backend successfully reloaded"
	I0929 10:38:05.926911       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0929 10:38:05.926976       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-pxng2", UID:"587b0efc-8a0d-48e9-8f76-f6c72ed2a5f2", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0929 10:38:05.993545       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-pxng2" node="addons-529420"
	W0929 10:45:37.492997       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 10:45:37.494693       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0929 10:45:37.498137       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0929 10:45:37.498313       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"1675f37f-1842-4388-a104-89352b87f57a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2589", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 10:45:40.520096       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 10:45:40.520746       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 10:45:40.560990       7 controller.go:228] "Backend successfully reloaded"
	I0929 10:45:40.561244       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-pxng2", UID:"587b0efc-8a0d-48e9-8f76-f6c72ed2a5f2", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 10:45:43.853384       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 10:46:05.896671       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I0929 10:46:05.900481       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"1675f37f-1842-4388-a104-89352b87f57a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2672", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 10:46:05.900661       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W0929 10:47:24.477409       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
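
The repeated "does not have any active Endpoint" warnings follow from the pull failures: the nginx pod never becomes Ready, so its Service's Endpoints object carries no ready addresses for the controller to route to. A minimal sketch of that emptiness check against the core/v1 Endpoints type (illustrative only, not the controller's actual code):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // hasActiveEndpoint reports whether any subset has at least one ready address.
    func hasActiveEndpoint(ep *corev1.Endpoints) bool {
    	for _, s := range ep.Subsets {
    		if len(s.Addresses) > 0 {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	ep := &corev1.Endpoints{} // no ready subsets: the "default/nginx" state above
    	fmt.Println(hasActiveEndpoint(ep)) // false -> controller logs the warning
    }
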
	
	
	==> coredns [c491d217e6a9] <==
	[INFO] 10.244.0.7:50415 - 40142 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000140179s
	[INFO] 10.244.0.7:51631 - 16547 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000077812s
	[INFO] 10.244.0.7:51631 - 16342 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000121622s
	[INFO] 10.244.0.7:54120 - 55745 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000074894s
	[INFO] 10.244.0.7:54120 - 55427 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00012144s
	[INFO] 10.244.0.7:44276 - 57260 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156752s
	[INFO] 10.244.0.7:44276 - 57057 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019348s
	[INFO] 10.244.0.25:60127 - 41853 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000357536s
	[INFO] 10.244.0.25:49489 - 60455 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000420887s
	[INFO] 10.244.0.25:56444 - 64863 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123075s
	[INFO] 10.244.0.25:47991 - 45219 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000219459s
	[INFO] 10.244.0.25:41073 - 37798 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132747s
	[INFO] 10.244.0.25:56023 - 28022 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00018111s
	[INFO] 10.244.0.25:34207 - 60772 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003985247s
	[INFO] 10.244.0.25:34528 - 32480 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00679976s
	[INFO] 10.244.0.25:57103 - 8380 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006696106s
	[INFO] 10.244.0.25:58770 - 56546 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.011895734s
	[INFO] 10.244.0.25:46155 - 33827 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004624849s
	[INFO] 10.244.0.25:33222 - 31647 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.008860505s
	[INFO] 10.244.0.25:50865 - 49473 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.0056946s
	[INFO] 10.244.0.25:47833 - 61181 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006586754s
	[INFO] 10.244.0.25:57597 - 20971 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000971648s
	[INFO] 10.244.0.25:51886 - 57600 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00104711s
	[INFO] 10.244.0.28:43176 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000337648s
	[INFO] 10.244.0.28:43537 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000209032s
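
The NXDOMAIN ladder above is resolv.conf search-list expansion at work: with options ndots:5 (visible in the cri-dockerd resolv.conf rewrite in the Docker section), any name with fewer than five dots is tried against every search domain before the bare name is queried. A minimal sketch reproducing the query order CoreDNS logged for storage.googleapis.com (illustrative):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // expand returns the lookup order a resolver tries for name given a search
    // list and an ndots threshold: search-domain candidates first, bare name last.
    func expand(name string, search []string, ndots int) []string {
    	var tries []string
    	if strings.Count(name, ".") < ndots {
    		for _, s := range search {
    			tries = append(tries, name+"."+s)
    		}
    	}
    	return append(tries, name) // absolute lookup last
    }

    func main() {
    	search := []string{
    		"gcp-auth.svc.cluster.local", "svc.cluster.local", "cluster.local",
    		"local", "europe-west2-a.c.k8s-minikube.internal",
    		"c.k8s-minikube.internal", "google.internal",
    	}
    	for _, q := range expand("storage.googleapis.com", search, 5) {
    		fmt.Println(q) // matches the NXDOMAIN sequence logged by CoreDNS
    	}
    }
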
	
	
	==> describe nodes <==
	Name:               addons-529420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-529420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=addons-529420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_36_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-529420
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-529420"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:36:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-529420
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:50:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:49:02 +0000   Mon, 29 Sep 2025 10:36:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-529420
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a82221623174cb89aca2e3330c89f2e
	  System UUID:                a576baa7-93e3-43ba-ad25-9862293a3344
	  Boot ID:                    2fed06dd-7ca1-49fc-8cb4-296c71642eea
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  gadget                      gadget-8bvtc                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-pxng2                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-5qvcx                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-z7562                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-529420                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-529420                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-529420                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-kxjqx                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-529420                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-7d9fbc56b8-bxfh2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-7d9fbc56b8-pwx9p                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-fg6hp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-529420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-529420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-529420 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-529420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-529420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-529420 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node addons-529420 event: Registered Node addons-529420 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 60 55 0a b3 db 08 06
	[  +0.024494] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 ec 8f 40 19 66 08 06
	[  +2.802739] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 72 ba 15 15 c7 ec 08 06
	[  +0.014450] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9a 38 c6 be 73 e8 08 06
	[  +2.936645] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 df 88 08 78 89 08 06
	[  +0.326891] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 b3 ea 3d 15 d3 08 06
	[  +0.292071] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 79 3e 6e 1c 85 08 06
	[Sep29 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 fd 85 4d 34 7c 08 06
	[  +6.578912] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 8d 25 f9 d3 f7 08 06
	[  +0.000514] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[Sep29 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 20 b3 46 3c a7 08 06
	[  +0.000504] IPv4: martian source 10.244.0.28 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[  +0.000784] IPv4: martian source 10.244.0.28 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 7b 6f c2 51 08 06
	
	
	==> etcd [3a3152518f29] <==
	{"level":"warn","ts":"2025-09-29T10:36:45.881854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.888653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.902856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.911144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.918112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.975495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:57.777500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36938","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:37:03.895377Z","caller":"traceutil/trace.go:172","msg":"trace[858034751] transaction","detail":"{read_only:false; response_revision:1015; number_of_response:1; }","duration":"128.948339ms","start":"2025-09-29T10:37:03.766408Z","end":"2025-09-29T10:37:03.895357Z","steps":["trace[858034751] 'process raft request'  (duration: 128.809516ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:37:04.039951Z","caller":"traceutil/trace.go:172","msg":"trace[799810606] transaction","detail":"{read_only:false; response_revision:1016; number_of_response:1; }","duration":"139.742546ms","start":"2025-09-29T10:37:03.900189Z","end":"2025-09-29T10:37:04.039932Z","steps":["trace[799810606] 'process raft request'  (duration: 139.624536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:37:23.432233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.465158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.491685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.505493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.513523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.523591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.578347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.609684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.635915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.643629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.654366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.666394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55308","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:38:03.955142Z","caller":"traceutil/trace.go:172","msg":"trace[1495019545] transaction","detail":"{read_only:false; response_revision:1324; number_of_response:1; }","duration":"128.921298ms","start":"2025-09-29T10:38:03.826200Z","end":"2025-09-29T10:38:03.955121Z","steps":["trace[1495019545] 'process raft request'  (duration: 102.9549ms)","trace[1495019545] 'compare'  (duration: 25.853332ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T10:46:45.426042Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1847}
	{"level":"info","ts":"2025-09-29T10:46:45.453266Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1847,"took":"26.541323ms","hash":3270274898,"current-db-size-bytes":10637312,"current-db-size":"11 MB","current-db-size-in-use-bytes":6828032,"current-db-size-in-use":"6.8 MB"}
	{"level":"info","ts":"2025-09-29T10:46:45.453313Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3270274898,"revision":1847,"compact-revision":-1}
	
	
	==> kernel <==
	 10:50:31 up  2:32,  0 users,  load average: 0.15, 0.32, 0.68
	Linux addons-529420 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [cc27cd95c2f3] <==
	W0929 10:44:42.777299       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0929 10:44:42.793473       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0929 10:44:42.800545       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0929 10:44:42.861870       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W0929 10:44:42.888433       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0929 10:44:42.939793       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W0929 10:44:43.206461       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W0929 10:44:43.309112       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	I0929 10:44:45.905061       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0929 10:45:02.075817       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47796: use of closed network connection
	E0929 10:45:02.267143       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47826: use of closed network connection
	I0929 10:45:11.278756       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.35.49"}
	I0929 10:45:32.176305       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:45:37.495617       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 10:45:37.674939       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.237.222"}
	I0929 10:45:46.221514       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 10:46:01.566752       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:34.023075       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:46.423056       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 10:47:12.562468       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:47:52.427235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:48:29.056133       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:49:06.267590       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:49:32.974902       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:50:24.249376       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [ea3b2c765f2a] <==
	E0929 10:49:23.962376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:49:31.691354       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:49:31.692460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:49:32.439486       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:49:32.440990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:49:32.499910       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:49:32.501106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:49:59.645793       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:49:59.646925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:00.724202       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:00.725311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:01.764504       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:01.765690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:07.439509       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:07.440587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:07.568358       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:07.569464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:13.215688       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:13.216888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:14.350621       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:14.351750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:28.643651       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:28.644758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:50:29.722403       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:50:29.723436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [7f229dd48a2c] <==
	I0929 10:36:55.962965       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:36:56.114754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:36:56.216582       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:56.217225       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:36:56.218058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:56.276653       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:36:56.277056       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:56.332280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:56.332751       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:56.332790       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:56.334977       1 config.go:200] "Starting service config controller"
	I0929 10:36:56.335003       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:56.335633       1 config.go:309] "Starting node config controller"
	I0929 10:36:56.335645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:56.335652       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:56.335877       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:56.335887       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:56.335924       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:56.335932       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:56.435153       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:56.436458       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:56.436501       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7f38046322a2] <==
	I0929 10:36:46.773756       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0929 10:36:46.774579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:36:46.774618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 10:36:46.774676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:36:46.775018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:36:46.775287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:36:46.775438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:36:46.775611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:36:46.775732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:36:46.775732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:36:46.775766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:36:46.776166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:36:46.776381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:36:46.776394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:36:46.776523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:36:46.776523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:36:46.776556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:36:46.776598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:36:46.776614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 10:36:46.776631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:36:47.607010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:36:47.607036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:36:47.611460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:36:47.673666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0929 10:36:47.973483       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:49:46 addons-529420 kubelet[2313]: I0929 10:49:46.598886    2313 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4c1584ac-be4e-440a-a07d-b085835a2b99-script\") pod \"4c1584ac-be4e-440a-a07d-b085835a2b99\" (UID: \"4c1584ac-be4e-440a-a07d-b085835a2b99\") "
	Sep 29 10:49:46 addons-529420 kubelet[2313]: I0929 10:49:46.598925    2313 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2b2h\" (UniqueName: \"kubernetes.io/projected/4c1584ac-be4e-440a-a07d-b085835a2b99-kube-api-access-q2b2h\") pod \"4c1584ac-be4e-440a-a07d-b085835a2b99\" (UID: \"4c1584ac-be4e-440a-a07d-b085835a2b99\") "
	Sep 29 10:49:46 addons-529420 kubelet[2313]: I0929 10:49:46.598973    2313 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1584ac-be4e-440a-a07d-b085835a2b99-data" (OuterVolumeSpecName: "data") pod "4c1584ac-be4e-440a-a07d-b085835a2b99" (UID: "4c1584ac-be4e-440a-a07d-b085835a2b99"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Sep 29 10:49:46 addons-529420 kubelet[2313]: I0929 10:49:46.599056    2313 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4c1584ac-be4e-440a-a07d-b085835a2b99-data\") on node \"addons-529420\" DevicePath \"\""
	Sep 29 10:49:46 addons-529420 kubelet[2313]: I0929 10:49:46.599317    2313 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1584ac-be4e-440a-a07d-b085835a2b99-script" (OuterVolumeSpecName: "script") pod "4c1584ac-be4e-440a-a07d-b085835a2b99" (UID: "4c1584ac-be4e-440a-a07d-b085835a2b99"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Sep 29 10:49:46 addons-529420 kubelet[2313]: I0929 10:49:46.601222    2313 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1584ac-be4e-440a-a07d-b085835a2b99-kube-api-access-q2b2h" (OuterVolumeSpecName: "kube-api-access-q2b2h") pod "4c1584ac-be4e-440a-a07d-b085835a2b99" (UID: "4c1584ac-be4e-440a-a07d-b085835a2b99"). InnerVolumeSpecName "kube-api-access-q2b2h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 29 10:49:46 addons-529420 kubelet[2313]: I0929 10:49:46.700188    2313 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4c1584ac-be4e-440a-a07d-b085835a2b99-script\") on node \"addons-529420\" DevicePath \"\""
	Sep 29 10:49:46 addons-529420 kubelet[2313]: I0929 10:49:46.700226    2313 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q2b2h\" (UniqueName: \"kubernetes.io/projected/4c1584ac-be4e-440a-a07d-b085835a2b99-kube-api-access-q2b2h\") on node \"addons-529420\" DevicePath \"\""
	Sep 29 10:49:48 addons-529420 kubelet[2313]: I0929 10:49:48.724855    2313 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1584ac-be4e-440a-a07d-b085835a2b99" path="/var/lib/kubelet/pods/4c1584ac-be4e-440a-a07d-b085835a2b99/volumes"
	Sep 29 10:49:51 addons-529420 kubelet[2313]: E0929 10:49:51.715228    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:49:57 addons-529420 kubelet[2313]: E0929 10:49:57.712709    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:50:03 addons-529420 kubelet[2313]: E0929 10:50:03.715173    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:50:09 addons-529420 kubelet[2313]: E0929 10:50:09.713847    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:50:14 addons-529420 kubelet[2313]: E0929 10:50:14.715562    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:50:16 addons-529420 kubelet[2313]: I0929 10:50:16.391129    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b17506ae-7246-4791-9e59-d2d2220b079a-script\") pod \"helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41\" (UID: \"b17506ae-7246-4791-9e59-d2d2220b079a\") " pod="local-path-storage/helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41"
	Sep 29 10:50:16 addons-529420 kubelet[2313]: I0929 10:50:16.391201    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzf5x\" (UniqueName: \"kubernetes.io/projected/b17506ae-7246-4791-9e59-d2d2220b079a-kube-api-access-wzf5x\") pod \"helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41\" (UID: \"b17506ae-7246-4791-9e59-d2d2220b079a\") " pod="local-path-storage/helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41"
	Sep 29 10:50:16 addons-529420 kubelet[2313]: I0929 10:50:16.391256    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b17506ae-7246-4791-9e59-d2d2220b079a-data\") pod \"helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41\" (UID: \"b17506ae-7246-4791-9e59-d2d2220b079a\") " pod="local-path-storage/helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41"
	Sep 29 10:50:17 addons-529420 kubelet[2313]: E0929 10:50:17.835048    2313 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:50:17 addons-529420 kubelet[2313]: E0929 10:50:17.835104    2313 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:50:17 addons-529420 kubelet[2313]: E0929 10:50:17.835218    2313 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41_local-path-storage(b17506ae-7246-4791-9e59-d2d2220b079a): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:50:17 addons-529420 kubelet[2313]: E0929 10:50:17.835268    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41" podUID="b17506ae-7246-4791-9e59-d2d2220b079a"
	Sep 29 10:50:18 addons-529420 kubelet[2313]: E0929 10:50:18.532335    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41" podUID="b17506ae-7246-4791-9e59-d2d2220b079a"
	Sep 29 10:50:19 addons-529420 kubelet[2313]: I0929 10:50:19.713534    2313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-5qvcx" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:50:22 addons-529420 kubelet[2313]: E0929 10:50:22.712908    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:50:26 addons-529420 kubelet[2313]: E0929 10:50:26.715239    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	
	
	==> storage-provisioner [7292e19e493f] <==
	W0929 10:50:06.299212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:08.302618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:08.306862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:10.310807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:10.314875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:12.318302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:12.322956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:14.326044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:14.330188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:16.332953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:16.337505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:18.340350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:18.345997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:20.349228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:20.353273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:22.356908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:22.361153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:24.365150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:24.369446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:26.372750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:26.377657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:28.380857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:28.386074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:30.389175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:50:30.393881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [910e07bc5455] <==
	I0929 10:36:56.614357       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 10:37:26.626869       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
-- /stdout --
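Every pull failure recorded in the kubelet section of the log dump above ends in the same Docker Hub signature: "toomanyrequests: You have reached your unauthenticated pull rate limit". As a hedged sketch (these commands are not harness output; the profile name is taken from this run), the limit can be confirmed independently of the tests by pulling from inside the node:

	minikube -p addons-529420 ssh -- docker pull nginx:alpine
	# While the limit is in effect, this fails with the same toomanyrequests
	# error the kubelet reports.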
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-529420 -n addons-529420
helpers_test.go:269: (dbg) Run:  kubectl --context addons-529420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41: exit status 1 (90.468985ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-529420/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:45:37 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nbt6z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nbt6z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m55s                 default-scheduler  Successfully assigned default/nginx to addons-529420
	  Normal   Pulling    2m (x5 over 4m54s)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     119s (x5 over 4m53s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     119s (x5 over 4m53s)  kubelet            Error: ErrImagePull
	  Warning  Failed     55s (x15 over 4m53s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    6s (x19 over 4m53s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-529420/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:45:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxqpd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-xxqpd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m3s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-529420
	  Warning  Failed     5m1s                  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    117s (x5 over 5m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     116s (x5 over 5m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed     116s (x4 over 4m45s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     61s (x15 over 5m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    10s (x19 over 5m)     kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-29hwq (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-29hwq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tj9td" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ncp6w" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-529420 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.837521412s)
--- FAIL: TestAddons/parallel/LocalPath (345.10s)
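All of the non-running pods listed above failed the same way, on Docker Hub's unauthenticated pull rate limit, not on anything local-path-specific. A hedged mitigation sketch, assuming a host with authenticated or otherwise un-throttled access to Docker Hub (note the helper pod pins busybox:stable by digest, so the loaded tag must resolve to that same digest):

	# On the host, where the pull is not throttled:
	docker pull nginx:alpine
	docker pull busybox:stable
	# Side-load the cached images into the cluster so the kubelet
	# never has to pull them remotely:
	minikube -p addons-529420 image load nginx:alpine
	minikube -p addons-529420 image load busybox:stable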

                                                
                                    
TestAddons/parallel/Yakd (127.93s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-5thg8" [23d68edd-cead-477c-97b6-f1a7f8b03cbd] Pending / Ready:ContainersNotReady (containers with unready status: [yakd]) / ContainersReady:ContainersNotReady (containers with unready status: [yakd])
addons_test.go:1047: ***** TestAddons/parallel/Yakd: pod "app.kubernetes.io/name=yakd-dashboard" failed to start within 2m0s: context deadline exceeded ****
addons_test.go:1047: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-529420 -n addons-529420
addons_test.go:1047: TestAddons/parallel/Yakd: showing logs for failed pods as of 2025-09-29 10:47:22.057889789 +0000 UTC m=+695.893808606
addons_test.go:1047: (dbg) Run:  kubectl --context addons-529420 describe po yakd-dashboard-5ff678cb9-5thg8 -n yakd-dashboard
addons_test.go:1047: (dbg) kubectl --context addons-529420 describe po yakd-dashboard-5ff678cb9-5thg8 -n yakd-dashboard:
Name:             yakd-dashboard-5ff678cb9-5thg8
Namespace:        yakd-dashboard
Priority:         0
Service Account:  yakd-dashboard
Node:             addons-529420/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:36:55 +0000
Labels:           app.kubernetes.io/instance=yakd-dashboard
app.kubernetes.io/name=yakd-dashboard
gcp-auth-skip-secret=true
pod-template-hash=5ff678cb9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/yakd-dashboard-5ff678cb9
Containers:
yakd:
Container ID:   
Image:          docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624
Image ID:       
Port:           8080/TCP (http)
Host Port:      0/TCP (http)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
memory:  256Mi
Requests:
memory:   128Mi
Liveness:   http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
Readiness:  http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
Environment:
KUBERNETES_NAMESPACE:  yakd-dashboard (v1:metadata.namespace)
HOSTNAME:              yakd-dashboard-5ff678cb9-5thg8 (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6wdmd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6wdmd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned yakd-dashboard/yakd-dashboard-5ff678cb9-5thg8 to addons-529420
Warning  Failed     9m55s                   kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    6m57s (x5 over 10m)     kubelet            Pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
Warning  Failed     6m55s (x5 over 9m55s)   kubelet            Error: ErrImagePull
Warning  Failed     6m55s (x4 over 9m42s)   kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m46s (x21 over 9m54s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    14s (x41 over 9m54s)    kubelet            Back-off pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
addons_test.go:1047: (dbg) Run:  kubectl --context addons-529420 logs yakd-dashboard-5ff678cb9-5thg8 -n yakd-dashboard
addons_test.go:1047: (dbg) Non-zero exit: kubectl --context addons-529420 logs yakd-dashboard-5ff678cb9-5thg8 -n yakd-dashboard: exit status 1 (67.783554ms)

** stderr ** 
	Error from server (BadRequest): container "yakd" in pod "yakd-dashboard-5ff678cb9-5thg8" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:1047: kubectl --context addons-529420 logs yakd-dashboard-5ff678cb9-5thg8 -n yakd-dashboard: exit status 1
addons_test.go:1048: failed waiting for YAKD - Kubernetes Dashboard pod: app.kubernetes.io/name=yakd-dashboard within 2m0s: context deadline exceeded
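The describe output above already pins the Yakd failure on the image pull; the same conclusion can be read from the namespace events alone. A hedged one-liner using standard kubectl flags (not harness code):

	kubectl --context addons-529420 get events -n yakd-dashboard --field-selector reason=Failed --sort-by=.lastTimestamp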
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Yakd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Yakd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-529420
helpers_test.go:243: (dbg) docker inspect addons-529420:

-- stdout --
	[
	    {
	        "Id": "c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70",
	        "Created": "2025-09-29T10:36:32.534326886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 809726,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:36:32.572013527Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/hostname",
	        "HostsPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/hosts",
	        "LogPath": "/var/lib/docker/containers/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70/c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70-json.log",
	        "Name": "/addons-529420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-529420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-529420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c119ca8ea69036edc3cadb6923f83c5ce0ad278d98abd07121e74c6603626b70",
	                "LowerDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21-init/diff:/var/lib/docker/overlay2/8785ead96546661033f04e32798b1ac8b184b1c04fc8862515b87d56909b6b9d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a6aa4b945b92dc7997627af83b36f77536ae206c022f4b5596a3201308a5d21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-529420",
	                "Source": "/var/lib/docker/volumes/addons-529420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-529420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-529420",
	                "name.minikube.sigs.k8s.io": "addons-529420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "198f81b793e83c57ca2854cf8d1d0464141313e0495f23d0367891bbb8b05aa2",
	            "SandboxKey": "/var/run/docker/netns/198f81b793e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-529420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:86:fe:b5:4c:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3724749a501df880fce098abbef40c1fc5e7d4b0ca633d7b7b33237babda66c6",
	                    "EndpointID": "381be5dcde6efd6127536cd89de924bde7f3b532aa3b84038bdaf13a91eea4d0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-529420",
	                        "c119ca8ea690"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
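For spot checks, the same inspect data can be read field-by-field with Go templates rather than scanning the full JSON dump; a minimal sketch against the container above (illustrative commands, not output captured by this run):

  docker inspect -f '{{.State.Status}}' addons-529420
  docker inspect -f '{{json .NetworkSettings.Ports}}' addons-529420
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-529420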
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-529420 -n addons-529420
helpers_test.go:252: <<< TestAddons/parallel/Yakd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Yakd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 logs -n 25
helpers_test.go:260: TestAddons/parallel/Yakd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-079557 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-079557   │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │ 29 Sep 25 10:35 UTC │
	│ delete  │ -p download-only-079557                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-079557   │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │ 29 Sep 25 10:35 UTC │
	│ start   │ -o=json --download-only -p download-only-239254 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-239254   │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ delete  │ -p download-only-239254                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-239254   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ delete  │ -p download-only-079557                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-079557   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ delete  │ -p download-only-239254                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-239254   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ start   │ --download-only -p download-docker-537454 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-537454 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ delete  │ -p download-docker-537454                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-537454 │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ start   │ --download-only -p binary-mirror-606832 --alsologtostderr --binary-mirror http://127.0.0.1:41679 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-606832   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ delete  │ -p binary-mirror-606832                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-606832   │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:36 UTC │
	│ addons  │ enable dashboard -p addons-529420                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ addons  │ disable dashboard -p addons-529420                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │                     │
	│ start   │ -p addons-529420 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:36 UTC │ 29 Sep 25 10:38 UTC │
	│ addons  │ addons-529420 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:44 UTC │ 29 Sep 25 10:44 UTC │
	│ addons  │ addons-529420 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ enable headlamp -p addons-529420 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ ip      │ addons-529420 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	│ addons  │ addons-529420 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-529420          │ jenkins │ v1.37.0 │ 29 Sep 25 10:45 UTC │ 29 Sep 25 10:45 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:36:12
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:36:12.344032  809094 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:36:12.344307  809094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:36:12.344317  809094 out.go:374] Setting ErrFile to fd 2...
	I0929 10:36:12.344322  809094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:36:12.344556  809094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 10:36:12.345145  809094 out.go:368] Setting JSON to false
	I0929 10:36:12.346064  809094 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8313,"bootTime":1759133859,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:36:12.346178  809094 start.go:140] virtualization: kvm guest
	I0929 10:36:12.347964  809094 out.go:179] * [addons-529420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:36:12.349153  809094 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:36:12.349158  809094 notify.go:220] Checking for updates...
	I0929 10:36:12.351412  809094 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:36:12.352567  809094 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:36:12.353660  809094 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:36:12.354776  809094 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:36:12.356142  809094 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:36:12.357739  809094 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:36:12.381211  809094 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:36:12.381322  809094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:36:12.437520  809094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:36:12.425260973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:36:12.437676  809094 docker.go:318] overlay module found
	I0929 10:36:12.439238  809094 out.go:179] * Using the docker driver based on user configuration
	I0929 10:36:12.440251  809094 start.go:304] selected driver: docker
	I0929 10:36:12.440267  809094 start.go:924] validating driver "docker" against <nil>
	I0929 10:36:12.440280  809094 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:36:12.440926  809094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:36:12.500513  809094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:36:12.490511662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:36:12.500726  809094 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:36:12.500930  809094 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:36:12.502519  809094 out.go:179] * Using Docker driver with root privileges
	I0929 10:36:12.503551  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:12.503623  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:12.503635  809094 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:36:12.503733  809094 start.go:348] cluster config:
	{Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:36:12.504920  809094 out.go:179] * Starting "addons-529420" primary control-plane node in "addons-529420" cluster
	I0929 10:36:12.505901  809094 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 10:36:12.506980  809094 out.go:179] * Pulling base image v0.0.48 ...
	I0929 10:36:12.508072  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:12.508109  809094 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:36:12.508127  809094 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 10:36:12.508138  809094 cache.go:58] Caching tarball of preloaded images
	I0929 10:36:12.508233  809094 preload.go:172] Found /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 10:36:12.508247  809094 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 10:36:12.508686  809094 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json ...
	I0929 10:36:12.508722  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json: {Name:mkf2bdb1c1bab8c96c543722d15e10cff077f479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:12.525703  809094 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:36:12.525838  809094 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:36:12.525859  809094 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:36:12.525863  809094 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:36:12.525870  809094 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:36:12.525876  809094 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 10:36:25.658111  809094 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 10:36:25.658161  809094 cache.go:232] Successfully downloaded all kic artifacts
	I0929 10:36:25.658213  809094 start.go:360] acquireMachinesLock for addons-529420: {Name:mkb1d52da565bdfba9b5d34f2b7f2b7e0a10b5f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:36:25.658335  809094 start.go:364] duration metric: took 99.814µs to acquireMachinesLock for "addons-529420"
	I0929 10:36:25.658362  809094 start.go:93] Provisioning new machine with config: &{Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 10:36:25.658452  809094 start.go:125] createHost starting for "" (driver="docker")
	I0929 10:36:25.660487  809094 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 10:36:25.660770  809094 start.go:159] libmachine.API.Create for "addons-529420" (driver="docker")
	I0929 10:36:25.660815  809094 client.go:168] LocalClient.Create starting
	I0929 10:36:25.660956  809094 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem
	I0929 10:36:25.875289  809094 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem
	I0929 10:36:26.406689  809094 cli_runner.go:164] Run: docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 10:36:26.424628  809094 cli_runner.go:211] docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 10:36:26.424709  809094 network_create.go:284] running [docker network inspect addons-529420] to gather additional debugging logs...
	I0929 10:36:26.424730  809094 cli_runner.go:164] Run: docker network inspect addons-529420
	W0929 10:36:26.443146  809094 cli_runner.go:211] docker network inspect addons-529420 returned with exit code 1
	I0929 10:36:26.443184  809094 network_create.go:287] error running [docker network inspect addons-529420]: docker network inspect addons-529420: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-529420 not found
	I0929 10:36:26.443196  809094 network_create.go:289] output of [docker network inspect addons-529420]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-529420 not found
	
	** /stderr **
	I0929 10:36:26.443292  809094 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:36:26.462904  809094 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bea4c0}
	I0929 10:36:26.462945  809094 network_create.go:124] attempt to create docker network addons-529420 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 10:36:26.462999  809094 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-529420 addons-529420
	I0929 10:36:26.525434  809094 network_create.go:108] docker network addons-529420 192.168.49.0/24 created
	I0929 10:36:26.525484  809094 kic.go:121] calculated static IP "192.168.49.2" for the "addons-529420" container
	I0929 10:36:26.525595  809094 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 10:36:26.543028  809094 cli_runner.go:164] Run: docker volume create addons-529420 --label name.minikube.sigs.k8s.io=addons-529420 --label created_by.minikube.sigs.k8s.io=true
	I0929 10:36:26.562375  809094 oci.go:103] Successfully created a docker volume addons-529420
	I0929 10:36:26.562495  809094 cli_runner.go:164] Run: docker run --rm --name addons-529420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --entrypoint /usr/bin/test -v addons-529420:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 10:36:28.727983  809094 cli_runner.go:217] Completed: docker run --rm --name addons-529420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --entrypoint /usr/bin/test -v addons-529420:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (2.165422344s)
	I0929 10:36:28.728035  809094 oci.go:107] Successfully prepared a docker volume addons-529420
	I0929 10:36:28.728077  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:28.728109  809094 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 10:36:28.728393  809094 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-529420:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 10:36:32.463008  809094 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-529420:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.734545448s)
	I0929 10:36:32.463043  809094 kic.go:203] duration metric: took 3.734930316s to extract preloaded images to volume ...
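	# Note: the two cli_runner lines above are minikube's preload shortcut: the
	# cached Kubernetes image tarball is untarred straight into the named Docker
	# volume by a throw-away container whose entrypoint is overridden to tar.
	# A minimal sketch of the same pattern (volume and tarball names are
	# hypothetical; the image reference is the kicbase image from this log):
	docker volume create demo-var
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	  -v demo-var:/extractDir \
	  gcr.io/k8s-minikube/kicbase:v0.0.48 -I lz4 -xf /preloaded.tar -C /extractDir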
	W0929 10:36:32.463138  809094 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 10:36:32.463171  809094 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 10:36:32.463213  809094 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 10:36:32.517633  809094 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-529420 --name addons-529420 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-529420 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-529420 --network addons-529420 --ip 192.168.49.2 --volume addons-529420:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
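	# Note: every --publish in the command above uses the 127.0.0.1:: form, which
	# binds the container port to an ephemeral host port on loopback only; the
	# concrete ports (33519-33523 in the docker inspect output earlier) are
	# resolved after creation. A minimal sketch (container and image names are
	# hypothetical):
	docker run -d --name demo --publish=127.0.0.1::8443 some-image
	docker port demo 8443    # prints the assigned binding, e.g. 127.0.0.1:49153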
	I0929 10:36:32.782605  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Running}}
	I0929 10:36:32.802682  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:32.821845  809094 cli_runner.go:164] Run: docker exec addons-529420 stat /var/lib/dpkg/alternatives/iptables
	I0929 10:36:32.871577  809094 oci.go:144] the created container "addons-529420" has a running status.
	I0929 10:36:32.871612  809094 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa...
	I0929 10:36:33.119788  809094 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 10:36:33.151014  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:33.170919  809094 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 10:36:33.170942  809094 kic_runner.go:114] Args: [docker exec --privileged addons-529420 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 10:36:33.215532  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:33.236236  809094 machine.go:93] provisionDockerMachine start ...
	I0929 10:36:33.236361  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.258174  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.258466  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.258486  809094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 10:36:33.399100  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-529420
	
	I0929 10:36:33.399136  809094 ubuntu.go:182] provisioning hostname "addons-529420"
	I0929 10:36:33.399203  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.418829  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.419101  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.419124  809094 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-529420 && echo "addons-529420" | sudo tee /etc/hostname
	I0929 10:36:33.570695  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-529420
	
	I0929 10:36:33.570813  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.590267  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:33.590566  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:33.590590  809094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-529420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-529420/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-529420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:36:33.728831  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:36:33.728862  809094 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21656-804126/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-804126/.minikube}
	I0929 10:36:33.728905  809094 ubuntu.go:190] setting up certificates
	I0929 10:36:33.728918  809094 provision.go:84] configureAuth start
	I0929 10:36:33.728972  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:33.748261  809094 provision.go:143] copyHostCerts
	I0929 10:36:33.748352  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/ca.pem (1082 bytes)
	I0929 10:36:33.748507  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/cert.pem (1123 bytes)
	I0929 10:36:33.748577  809094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-804126/.minikube/key.pem (1679 bytes)
	I0929 10:36:33.748631  809094 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem org=jenkins.addons-529420 san=[127.0.0.1 192.168.49.2 addons-529420 localhost minikube]
	I0929 10:36:33.869211  809094 provision.go:177] copyRemoteCerts
	I0929 10:36:33.869279  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:36:33.869324  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:33.888791  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:33.988019  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:36:34.017500  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:36:34.045429  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 10:36:34.073342  809094 provision.go:87] duration metric: took 344.406928ms to configureAuth
	I0929 10:36:34.073378  809094 ubuntu.go:206] setting minikube options for container-runtime
	I0929 10:36:34.073606  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:34.073667  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.093622  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.093924  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.093946  809094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 10:36:34.233714  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 10:36:34.233744  809094 ubuntu.go:71] root file system type: overlay
	I0929 10:36:34.233863  809094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 10:36:34.233924  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.253864  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.254082  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.254160  809094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 10:36:34.407267  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 10:36:34.407373  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:34.426330  809094 main.go:141] libmachine: Using SSH client type: native
	I0929 10:36:34.426664  809094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33519 <nil> <nil>}
	I0929 10:36:34.426694  809094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 10:36:35.600389  809094 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 10:36:34.405290761 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
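The unit written above first clears the inherited ExecStart= and then sets its own, for the reason spelled out in the unit's embedded comments. A minimal sketch of confirming this on the node (assuming shell access, e.g. via minikube ssh):

	# Both lines count: the blank reset plus the real dockerd command.
	systemctl cat docker.service | grep -c '^ExecStart='   # expect: 2
	# systemd reports only the effective invocation after the reset.
	systemctl show docker.service -p ExecStart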
	
	I0929 10:36:35.600442  809094 machine.go:96] duration metric: took 2.364149235s to provisionDockerMachine
	I0929 10:36:35.600461  809094 client.go:171] duration metric: took 9.939636569s to LocalClient.Create
	I0929 10:36:35.600487  809094 start.go:167] duration metric: took 9.939718481s to libmachine.API.Create "addons-529420"
	I0929 10:36:35.600498  809094 start.go:293] postStartSetup for "addons-529420" (driver="docker")
	I0929 10:36:35.600507  809094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:36:35.600574  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:36:35.600613  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.620381  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.722159  809094 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:36:35.726032  809094 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 10:36:35.726071  809094 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 10:36:35.726079  809094 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 10:36:35.726086  809094 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 10:36:35.726099  809094 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-804126/.minikube/addons for local assets ...
	I0929 10:36:35.726162  809094 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-804126/.minikube/files for local assets ...
	I0929 10:36:35.726189  809094 start.go:296] duration metric: took 125.685891ms for postStartSetup
	I0929 10:36:35.726542  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:35.746842  809094 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/config.json ...
	I0929 10:36:35.747139  809094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:36:35.747190  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.766493  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.861001  809094 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 10:36:35.865987  809094 start.go:128] duration metric: took 10.207507252s to createHost
	I0929 10:36:35.866014  809094 start.go:83] releasing machines lock for "addons-529420", held for 10.207665879s
	I0929 10:36:35.866077  809094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-529420
	I0929 10:36:35.884982  809094 ssh_runner.go:195] Run: cat /version.json
	I0929 10:36:35.885044  809094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:36:35.885050  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.885102  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:35.904087  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:35.905861  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:36.079122  809094 ssh_runner.go:195] Run: systemctl --version
	I0929 10:36:36.084194  809094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 10:36:36.089268  809094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 10:36:36.123533  809094 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 10:36:36.123613  809094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:36:36.154267  809094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
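The find commands above patch the loopback config in place and sideline the bridge/podman configs by renaming them. A quick hedged check, assuming shell access to the node:

	# Disabled configs keep their content but carry a .mk_disabled suffix.
	ls /etc/cni/net.d/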
	I0929 10:36:36.154304  809094 start.go:495] detecting cgroup driver to use...
	I0929 10:36:36.154338  809094 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:36:36.154522  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:36:36.173730  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 10:36:36.186991  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 10:36:36.198504  809094 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 10:36:36.198583  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 10:36:36.210007  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 10:36:36.221306  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 10:36:36.232714  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 10:36:36.244397  809094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:36:36.255307  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 10:36:36.266876  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 10:36:36.278201  809094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 10:36:36.289714  809094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:36:36.299414  809094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 10:36:36.309736  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:36.379690  809094 ssh_runner.go:195] Run: sudo systemctl restart containerd
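The sed edits above switch containerd to the systemd cgroup driver before this restart. A minimal check that the setting took effect, assuming shell access to the node:

	grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = true
	sudo systemctl is-active containerd                   # expect: active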
	I0929 10:36:36.458212  809094 start.go:495] detecting cgroup driver to use...
	I0929 10:36:36.458261  809094 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:36:36.458313  809094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 10:36:36.471829  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:36:36.484463  809094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:36:36.501567  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:36:36.514414  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 10:36:36.527777  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:36:36.546772  809094 ssh_runner.go:195] Run: which cri-dockerd
	I0929 10:36:36.550960  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 10:36:36.562724  809094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 10:36:36.582736  809094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 10:36:36.653862  809094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 10:36:36.724043  809094 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 10:36:36.724204  809094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 10:36:36.744794  809094 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 10:36:36.757465  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:36.826884  809094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 10:36:37.610007  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:36:37.622898  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 10:36:37.636301  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 10:36:37.649738  809094 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 10:36:37.722666  809094 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 10:36:37.796505  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:37.867566  809094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 10:36:37.892266  809094 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 10:36:37.904989  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:37.972029  809094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 10:36:38.051455  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 10:36:38.065182  809094 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 10:36:38.065259  809094 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 10:36:38.069600  809094 start.go:563] Will wait 60s for crictl version
	I0929 10:36:38.069675  809094 ssh_runner.go:195] Run: which crictl
	I0929 10:36:38.073569  809094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:36:38.110640  809094 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 10:36:38.110713  809094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 10:36:38.140525  809094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 10:36:38.168486  809094 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 10:36:38.168581  809094 cli_runner.go:164] Run: docker network inspect addons-529420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:36:38.186777  809094 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 10:36:38.191260  809094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
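The one-liner above rewrites /etc/hosts atomically via a temp file. Assuming shell access to the node, the alias can be verified with:

	getent hosts host.minikube.internal   # expect: 192.168.49.1  host.minikube.internal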
	I0929 10:36:38.203969  809094 kubeadm.go:875] updating cluster {Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:36:38.204104  809094 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:38.204155  809094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 10:36:38.227008  809094 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 10:36:38.227036  809094 docker.go:621] Images already preloaded, skipping extraction
	I0929 10:36:38.227106  809094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 10:36:38.249601  809094 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 10:36:38.249632  809094 cache_images.go:85] Images are preloaded, skipping loading
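The preload check above can be reproduced by hand; run inside the node (or under minikube docker-env), the output should match the image list printed above:

	docker images --format '{{.Repository}}:{{.Tag}}' | grep -E 'registry.k8s.io|storage-provisioner'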
	I0929 10:36:38.249646  809094 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0929 10:36:38.249774  809094 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-529420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 10:36:38.249838  809094 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 10:36:38.304936  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:38.304974  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:38.305005  809094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:36:38.305052  809094 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-529420 NodeName:addons-529420 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:36:38.305228  809094 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-529420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 10:36:38.305304  809094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:36:38.315730  809094 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:36:38.315813  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:36:38.326318  809094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 10:36:38.346540  809094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:36:38.366500  809094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
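The kubeadm config printed above is staged here as kubeadm.yaml.new. As a hedged sketch, it can be checked before use without touching the node's state (kubeadm config validate requires kubeadm >= 1.26):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or a full no-op rehearsal of the init:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run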
	I0929 10:36:38.386461  809094 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 10:36:38.390498  809094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:36:38.403052  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:38.473834  809094 ssh_runner.go:195] Run: sudo systemctl start kubelet
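A quick check that the kubelet unit installed above actually came up, assuming shell access to the node:

	sudo systemctl is-active kubelet        # expect: active
	journalctl -u kubelet --no-pager -n 20  # recent kubelet log lines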
	I0929 10:36:38.499163  809094 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420 for IP: 192.168.49.2
	I0929 10:36:38.499185  809094 certs.go:194] generating shared ca certs ...
	I0929 10:36:38.499202  809094 certs.go:226] acquiring lock for ca certs: {Name:mkc1a6b9be22ca2c6acbf4ae075c8649cd7bb029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.499334  809094 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key
	I0929 10:36:38.646613  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt ...
	I0929 10:36:38.646650  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt: {Name:mk6af424fad818aae2ecb63097cb3cf4fa072fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.646838  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key ...
	I0929 10:36:38.646852  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key: {Name:mk82f55830669e727af6d6e5b82afa28223d781a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.646926  809094 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key
	I0929 10:36:38.721965  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt ...
	I0929 10:36:38.721996  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt: {Name:mkfb52e92ed776fc971f1dc2b44df4c3ddc05e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.722184  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key ...
	I0929 10:36:38.722197  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key: {Name:mk1e237f25fee0cc4ff615a96b95c3d6470c2cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:38.722265  809094 certs.go:256] generating profile certs ...
	I0929 10:36:38.722327  809094 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key
	I0929 10:36:38.722341  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt with IP's: []
	I0929 10:36:39.380758  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt ...
	I0929 10:36:39.380790  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: {Name:mkc77ef14ddf2986f76245d42c12726cef377eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.380976  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key ...
	I0929 10:36:39.380988  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.key: {Name:mke6b630fea371c2903c210193a2d24b7bb1f06f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.381077  809094 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27
	I0929 10:36:39.381099  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 10:36:39.520731  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 ...
	I0929 10:36:39.520767  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27: {Name:mkcca52b9df73c7117f5aaacf4f623dd6c9d43c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.520941  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27 ...
	I0929 10:36:39.520956  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27: {Name:mk26b5e71bc5d928280847a07112275ef7de7abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.521026  809094 certs.go:381] copying /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt.5ad07b27 -> /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt
	I0929 10:36:39.521153  809094 certs.go:385] copying /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key.5ad07b27 -> /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key
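The apiserver cert assembled above is signed for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2] listed earlier. A minimal check of the embedded SANs (using the profile path from the log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'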
	I0929 10:36:39.521216  809094 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key
	I0929 10:36:39.521237  809094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt with IP's: []
	I0929 10:36:39.588037  809094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt ...
	I0929 10:36:39.588071  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt: {Name:mkf924850f5da9a86daf75692dc66662efd3fc9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.588249  809094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key ...
	I0929 10:36:39.588264  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key: {Name:mkecb516723c573c2dd57316eed6cd1d6263ba08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:39.588475  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:36:39.588513  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:36:39.588537  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:36:39.588557  809094 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-804126/.minikube/certs/key.pem (1679 bytes)
	I0929 10:36:39.589229  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:36:39.617084  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 10:36:39.644094  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:36:39.671391  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 10:36:39.698770  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:36:39.726395  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 10:36:39.753719  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:36:39.781288  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 10:36:39.808573  809094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:36:39.838955  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:36:39.859027  809094 ssh_runner.go:195] Run: openssl version
	I0929 10:36:39.865345  809094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:36:39.879241  809094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.883476  809094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.883550  809094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:36:39.891125  809094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
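The b5213941.0 symlink name used above is OpenSSL's subject-hash form, which the /etc/ssl/certs lookup directory relies on; it can be recomputed from the cert itself:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941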
	I0929 10:36:39.902246  809094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:36:39.906800  809094 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:36:39.906853  809094 kubeadm.go:392] StartCluster: {Name:addons-529420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-529420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:36:39.907037  809094 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 10:36:39.927225  809094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:36:39.937579  809094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:36:39.948182  809094 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 10:36:39.948260  809094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:36:39.958474  809094 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:36:39.958499  809094 kubeadm.go:157] found existing configuration files:
	
	I0929 10:36:39.958547  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:36:39.968562  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:36:39.968615  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:36:39.978358  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:36:39.988499  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:36:39.988557  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:36:39.998306  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:36:40.008316  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:36:40.008394  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:36:40.018692  809094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:36:40.028624  809094 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:36:40.028694  809094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:36:40.038485  809094 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 10:36:40.113436  809094 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 10:36:40.172082  809094 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:36:49.487259  809094 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:36:49.487349  809094 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:36:49.487474  809094 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 10:36:49.487549  809094 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 10:36:49.487599  809094 kubeadm.go:310] OS: Linux
	I0929 10:36:49.487674  809094 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 10:36:49.487738  809094 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 10:36:49.487808  809094 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 10:36:49.487877  809094 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 10:36:49.487945  809094 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 10:36:49.488009  809094 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 10:36:49.488084  809094 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 10:36:49.488155  809094 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 10:36:49.488236  809094 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:36:49.488334  809094 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:36:49.488489  809094 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:36:49.488588  809094 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:36:49.490162  809094 out.go:252]   - Generating certificates and keys ...
	I0929 10:36:49.490252  809094 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:36:49.490322  809094 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:36:49.490390  809094 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:36:49.490463  809094 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:36:49.490546  809094 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:36:49.490627  809094 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:36:49.490693  809094 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:36:49.490785  809094 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-529420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:36:49.490833  809094 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:36:49.490923  809094 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-529420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:36:49.490993  809094 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:36:49.491078  809094 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:36:49.491125  809094 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:36:49.491174  809094 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:36:49.491224  809094 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:36:49.491279  809094 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:36:49.491356  809094 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:36:49.491494  809094 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:36:49.491545  809094 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:36:49.491621  809094 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:36:49.491692  809094 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:36:49.493671  809094 out.go:252]   - Booting up control plane ...
	I0929 10:36:49.493764  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:36:49.493839  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:36:49.493913  809094 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:36:49.494005  809094 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:36:49.494093  809094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:36:49.494209  809094 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:36:49.494311  809094 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:36:49.494360  809094 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:36:49.494507  809094 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:36:49.494615  809094 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:36:49.494666  809094 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001039286s
	I0929 10:36:49.494753  809094 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:36:49.494843  809094 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 10:36:49.494970  809094 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:36:49.495066  809094 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:36:49.495136  809094 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.704142157s
	I0929 10:36:49.495206  809094 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.204406716s
	I0929 10:36:49.495290  809094 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.502382535s
	I0929 10:36:49.495410  809094 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:36:49.495564  809094 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:36:49.495618  809094 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:36:49.495799  809094 kubeadm.go:310] [mark-control-plane] Marking the node addons-529420 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:36:49.495870  809094 kubeadm.go:310] [bootstrap-token] Using token: yuv8vz.ch07v7kt1fq2e5wb
	I0929 10:36:49.498139  809094 out.go:252]   - Configuring RBAC rules ...
	I0929 10:36:49.498247  809094 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:36:49.498343  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:36:49.498483  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:36:49.498597  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:36:49.498726  809094 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:36:49.498844  809094 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:36:49.498986  809094 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:36:49.499027  809094 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:36:49.499068  809094 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:36:49.499073  809094 kubeadm.go:310] 
	I0929 10:36:49.499124  809094 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:36:49.499129  809094 kubeadm.go:310] 
	I0929 10:36:49.499199  809094 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:36:49.499205  809094 kubeadm.go:310] 
	I0929 10:36:49.499230  809094 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:36:49.499295  809094 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:36:49.499340  809094 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:36:49.499345  809094 kubeadm.go:310] 
	I0929 10:36:49.499394  809094 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:36:49.499400  809094 kubeadm.go:310] 
	I0929 10:36:49.499496  809094 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:36:49.499503  809094 kubeadm.go:310] 
	I0929 10:36:49.499547  809094 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:36:49.499632  809094 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:36:49.499694  809094 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:36:49.499700  809094 kubeadm.go:310] 
	I0929 10:36:49.499768  809094 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:36:49.499834  809094 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:36:49.499840  809094 kubeadm.go:310] 
	I0929 10:36:49.499917  809094 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yuv8vz.ch07v7kt1fq2e5wb \
	I0929 10:36:49.500007  809094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52ed2ed4898a1f91b8e8d0663d1b9110ba480fd365338f5c4892399f9ccfb123 \
	I0929 10:36:49.500027  809094 kubeadm.go:310] 	--control-plane 
	I0929 10:36:49.500033  809094 kubeadm.go:310] 
	I0929 10:36:49.500105  809094 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:36:49.500110  809094 kubeadm.go:310] 
	I0929 10:36:49.500183  809094 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yuv8vz.ch07v7kt1fq2e5wb \
	I0929 10:36:49.500291  809094 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52ed2ed4898a1f91b8e8d0663d1b9110ba480fd365338f5c4892399f9ccfb123 
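The two join commands above carry a bootstrap token and the SHA-256 hash of the cluster CA's public key. As a hedged aside (kubeadm's default CA path assumed), that hash can be recomputed on the control plane with the standard openssl pipeline from the kubeadm docs:

	# Recompute --discovery-token-ca-cert-hash from the cluster CA (sketch; default kubeadm path assumed)
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | sha256sum | awk '{print "sha256:" $1}'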
	I0929 10:36:49.500305  809094 cni.go:84] Creating CNI manager for ""
	I0929 10:36:49.500322  809094 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:36:49.501607  809094 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 10:36:49.502541  809094 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 10:36:49.513883  809094 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
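The 496-byte /etc/cni/net.d/1-k8s.conflist written here enables the bridge CNI chosen two lines above. The log does not show the file's contents; purely as an illustration of the shape such a conflist takes (all field values assumed, including the pod subnet):

	# Illustrative only -- not the actual file minikube wrote
	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{ "cniVersion": "1.0.0", "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	      "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } } ] }
	EOF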
	I0929 10:36:49.534107  809094 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:36:49.534178  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:49.534217  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-529420 minikube.k8s.io/updated_at=2025_09_29T10_36_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8 minikube.k8s.io/name=addons-529420 minikube.k8s.io/primary=true
	I0929 10:36:49.634858  809094 ops.go:34] apiserver oom_adj: -16
	I0929 10:36:49.634879  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:50.135279  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:50.635199  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:51.135670  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:51.635820  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:52.135653  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:52.635900  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.135655  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.635560  809094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:36:53.702327  809094 kubeadm.go:1105] duration metric: took 4.168204298s to wait for elevateKubeSystemPrivileges
	I0929 10:36:53.702374  809094 kubeadm.go:394] duration metric: took 13.795525537s to StartCluster
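The burst of `kubectl get sa default` calls above is minikube polling, roughly every 500ms, until the default service account exists (the elevateKubeSystemPrivileges wait that follows the cluster-admin binding created at 10:36:49.534). A minimal standalone equivalent of that wait loop (a sketch, not minikube's actual code):

	# Poll ~every 500ms until the default ServiceAccount exists (sketch)
	until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done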
	I0929 10:36:53.702401  809094 settings.go:142] acquiring lock: {Name:mkba08796c8ac17f1f94d6f1a6799aa6725155e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:53.702549  809094 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:36:53.703073  809094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/kubeconfig: {Name:mkae1c4acd5fb2ec6cdb06d7e9593ac36dd4ccbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:53.703279  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:36:53.703300  809094 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 10:36:53.703359  809094 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 10:36:53.703546  809094 addons.go:69] Setting ingress=true in profile "addons-529420"
	I0929 10:36:53.703557  809094 addons.go:69] Setting metrics-server=true in profile "addons-529420"
	I0929 10:36:53.703575  809094 addons.go:69] Setting volcano=true in profile "addons-529420"
	I0929 10:36:53.703577  809094 addons.go:238] Setting addon ingress=true in "addons-529420"
	I0929 10:36:53.703583  809094 addons.go:238] Setting addon metrics-server=true in "addons-529420"
	I0929 10:36:53.703600  809094 addons.go:238] Setting addon volcano=true in "addons-529420"
	I0929 10:36:53.703611  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:53.703623  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703615  809094 addons.go:69] Setting storage-provisioner=true in profile "addons-529420"
	I0929 10:36:53.703640  809094 addons.go:69] Setting inspektor-gadget=true in profile "addons-529420"
	I0929 10:36:53.703649  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703650  809094 addons.go:238] Setting addon inspektor-gadget=true in "addons-529420"
	I0929 10:36:53.703670  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703670  809094 addons.go:69] Setting default-storageclass=true in profile "addons-529420"
	I0929 10:36:53.703673  809094 addons.go:238] Setting addon storage-provisioner=true in "addons-529420"
	I0929 10:36:53.703685  809094 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-529420"
	I0929 10:36:53.703665  809094 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-529420"
	I0929 10:36:53.703742  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703752  809094 addons.go:69] Setting registry=true in profile "addons-529420"
	I0929 10:36:53.703784  809094 addons.go:238] Setting addon registry=true in "addons-529420"
	I0929 10:36:53.703797  809094 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-529420"
	I0929 10:36:53.703816  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703839  809094 addons.go:69] Setting registry-creds=true in profile "addons-529420"
	I0929 10:36:53.703853  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703864  809094 addons.go:238] Setting addon registry-creds=true in "addons-529420"
	I0929 10:36:53.703888  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.704091  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704229  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704236  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704241  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704269  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704280  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704369  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704383  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.704822  809094 addons.go:69] Setting gcp-auth=true in profile "addons-529420"
	I0929 10:36:53.704853  809094 mustload.go:65] Loading cluster: addons-529420
	I0929 10:36:53.704946  809094 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-529420"
	I0929 10:36:53.704967  809094 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-529420"
	I0929 10:36:53.705010  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.705063  809094 config.go:182] Loaded profile config "addons-529420": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:36:53.705314  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.705514  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.705877  809094 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-529420"
	I0929 10:36:53.705901  809094 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-529420"
	I0929 10:36:53.705927  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.706142  809094 out.go:179] * Verifying Kubernetes components...
	I0929 10:36:53.706320  809094 addons.go:69] Setting volumesnapshots=true in profile "addons-529420"
	I0929 10:36:53.706369  809094 addons.go:238] Setting addon volumesnapshots=true in "addons-529420"
	I0929 10:36:53.706411  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703632  809094 addons.go:69] Setting ingress-dns=true in profile "addons-529420"
	I0929 10:36:53.706720  809094 addons.go:238] Setting addon ingress-dns=true in "addons-529420"
	I0929 10:36:53.706754  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.707040  809094 addons.go:69] Setting cloud-spanner=true in profile "addons-529420"
	I0929 10:36:53.707069  809094 addons.go:238] Setting addon cloud-spanner=true in "addons-529420"
	I0929 10:36:53.707096  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703568  809094 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-529420"
	I0929 10:36:53.707458  809094 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-529420"
	I0929 10:36:53.707552  809094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:36:53.703557  809094 addons.go:69] Setting yakd=true in profile "addons-529420"
	I0929 10:36:53.707960  809094 addons.go:238] Setting addon yakd=true in "addons-529420"
	I0929 10:36:53.707999  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.703623  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.714143  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.714652  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719013  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719122  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.719970  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.722035  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.723786  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.778757  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.780491  809094 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:36:53.781884  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:36:53.781909  809094 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:36:53.782011  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
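The repeated `docker container inspect -f ...HostPort...` runs here and below resolve which host port Docker mapped to the node container's sshd; the sshutil lines further down show it resolved to 33519 in this run. The same lookup by hand:

	# Which host port is mapped to the kic container's 22/tcp?
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-529420
	# -> 33519 in this run (matches the sshutil dials below)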
	I0929 10:36:53.783509  809094 addons.go:238] Setting addon default-storageclass=true in "addons-529420"
	I0929 10:36:53.783669  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.786763  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.807336  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:36:53.807376  809094 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:36:53.807346  809094 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:36:53.807406  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:36:53.808907  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:36:53.809965  809094 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-529420"
	I0929 10:36:53.810002  809094 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:36:53.810022  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:36:53.810026  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:36:53.810093  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.809085  809094 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:36:53.809123  809094 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:36:53.810358  809094 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:36:53.810445  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.810589  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:36:53.811657  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:36:53.811713  809094 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:36:53.811728  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:36:53.811729  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:36:53.811785  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.812811  809094 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:36:53.812830  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:36:53.812883  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.813727  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:36:53.814898  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:36:53.816283  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:36:53.817528  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 10:36:53.818644  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:36:53.820451  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:36:53.820547  809094 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:36:53.820676  809094 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:36:53.821594  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:36:53.821612  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:36:53.821686  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.822262  809094 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:36:53.823314  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:36:53.822343  809094 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:36:53.823758  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:36:53.823888  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.824274  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.826866  809094 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:36:53.828202  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:36:53.828229  809094 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:36:53.828293  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.831468  809094 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:36:53.831558  809094 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:36:53.831581  809094 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I0929 10:36:53.831884  809094 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:36:53.835051  809094 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:36:53.835084  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:36:53.835153  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.836300  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:36:53.836321  809094 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:36:53.836386  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.836883  809094 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I0929 10:36:53.837031  809094 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:36:53.838461  809094 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:36:53.838481  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:36:53.838563  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.839840  809094 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I0929 10:36:53.846729  809094 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 10:36:53.847497  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I0929 10:36:53.847543  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.847569  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.862719  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.868937  809094 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:36:53.870360  809094 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:36:53.870448  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:36:53.870559  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.876206  809094 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:36:53.876242  809094 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:36:53.879516  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.881669  809094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
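The sed pipeline above rewrites the live coredns ConfigMap in place: it inserts a `hosts` block ahead of the `forward . /etc/resolv.conf` directive and a `log` directive ahead of `errors`, then feeds the result to `kubectl replace`. Reconstructed from the command itself, the injected Corefile fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}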
	I0929 10:36:53.897463  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.897986  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.902073  809094 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:36:53.907027  809094 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:36:53.908323  809094 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:36:53.908508  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:36:53.908601  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:36:53.912847  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.916620  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.916689  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.918495  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.924984  809094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:36:53.927315  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.928318  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.929791  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.931522  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.931855  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.932316  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	W0929 10:36:53.933912  809094 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 10:36:53.933975  809094 retry.go:31] will retry after 222.37842ms: ssh: handshake failed: EOF
	I0929 10:36:53.947593  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:53.951956  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:36:54.017495  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:36:54.017528  809094 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:36:54.037524  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:36:54.047797  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:36:54.047824  809094 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:36:54.088850  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:36:54.102982  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:36:54.104728  809094 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:36:54.104759  809094 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:36:54.105824  809094 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:54.105847  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:36:54.113997  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:36:54.115206  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:36:54.115595  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:36:54.115658  809094 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:36:54.115954  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:36:54.131289  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:36:54.131324  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:36:54.141402  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:36:54.141685  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:36:54.141711  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:36:54.146347  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:36:54.146375  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:36:54.148830  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0929 10:36:54.158233  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:36:54.187390  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:54.199350  809094 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:36:54.199378  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:36:54.219151  809094 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:36:54.219235  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:36:54.228377  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:36:54.228502  809094 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:36:54.236642  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:36:54.236676  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:36:54.249985  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:36:54.250018  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:36:54.257015  809094 node_ready.go:35] waiting up to 6m0s for node "addons-529420" to be "Ready" ...
	I0929 10:36:54.257465  809094 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 10:36:54.260181  809094 node_ready.go:49] node "addons-529420" is "Ready"
	I0929 10:36:54.260213  809094 node_ready.go:38] duration metric: took 3.156144ms for node "addons-529420" to be "Ready" ...
	I0929 10:36:54.260251  809094 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:36:54.260304  809094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:36:54.275549  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:36:54.288410  809094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:36:54.288527  809094 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:36:54.309547  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:36:54.326392  809094 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:36:54.326462  809094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:36:54.371852  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:36:54.371884  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:36:54.385903  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:36:54.406122  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:36:54.441278  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:36:54.441321  809094 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:36:54.475876  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:36:54.475975  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:36:54.558603  809094 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:54.558634  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:36:54.567254  809094 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:36:54.567363  809094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:36:54.637129  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:36:54.637160  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:36:54.681921  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:54.728318  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:36:54.728360  809094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:36:54.766660  809094 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-529420" context rescaled to 1 replicas
	I0929 10:36:54.848450  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:36:54.848663  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:36:54.946361  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:36:54.946523  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:36:55.079437  809094 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:36:55.079471  809094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:36:55.190890  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:36:55.493984  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.390947237s)
	I0929 10:36:55.494004  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.379973258s)
	I0929 10:36:55.494048  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.378812825s)
	I0929 10:36:56.183759  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.042240387s)
	I0929 10:36:56.184086  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.068081762s)
	I0929 10:36:56.184109  809094 addons.go:479] Verifying addon ingress=true in "addons-529420"
	I0929 10:36:56.185647  809094 out.go:179] * Verifying ingress addon...
	I0929 10:36:56.187598  809094 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:36:56.207003  809094 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:36:56.207034  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:56.705051  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.193166  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.222996  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.064717663s)
	I0929 10:36:57.223366  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.035944416s)
	W0929 10:36:57.223400  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:36:57.223437  809094 retry.go:31] will retry after 152.765294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
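This validation error is consistent with the transfer logged at 10:36:53.810: ig-crd.yaml landed on the node at just 14 bytes, far too small for a CRD manifest, so kubectl finds neither apiVersion nor kind in it. A quick on-host check (a hypothetical diagnostic, not part of this log) would confirm the truncation:

	# Diagnostic sketch: is the CRD file actually empty/truncated?
	wc -c /etc/kubernetes/addons/ig-crd.yaml
	cat /etc/kubernetes/addons/ig-crd.yaml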
	I0929 10:36:57.223474  809094 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.963156736s)
	I0929 10:36:57.223491  809094 api_server.go:72] duration metric: took 3.520164967s to wait for apiserver process to appear ...
	I0929 10:36:57.223498  809094 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:36:57.223518  809094 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 10:36:57.223916  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.948332459s)
	I0929 10:36:57.223942  809094 addons.go:479] Verifying addon registry=true in "addons-529420"
	I0929 10:36:57.224224  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.914632573s)
	I0929 10:36:57.224522  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.838576934s)
	I0929 10:36:57.224572  809094 addons.go:479] Verifying addon metrics-server=true in "addons-529420"
	I0929 10:36:57.224580  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.818313312s)
	I0929 10:36:57.224688  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542731559s)
	W0929 10:36:57.224716  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:36:57.224732  809094 retry.go:31] will retry after 151.241621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
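Unlike the ig-crd case, this failure is one of ordering rather than content: the VolumeSnapshotClass object is applied in the same kubectl batch that creates its CRD, and the API server has not yet established the new type, hence `no matches for kind`. minikube's answer is the 151ms retry; the race can also be avoided by applying the CRD first and waiting for it to become Established, e.g. (sketch):

	# Apply the CRD, wait until the API server establishes it, then apply the CR (sketch)
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml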
	I0929 10:36:57.224940  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.034013059s)
	I0929 10:36:57.224964  809094 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-529420"
	I0929 10:36:57.225804  809094 out.go:179] * Verifying registry addon...
	I0929 10:36:57.225829  809094 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-529420 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:36:57.227287  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.078403021s)
	I0929 10:36:57.228280  809094 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:36:57.229531  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:36:57.230385  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 10:36:57.236158  809094 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 10:36:57.238077  809094 api_server.go:141] control plane version: v1.34.0
	I0929 10:36:57.238109  809094 api_server.go:131] duration metric: took 14.602902ms to wait for apiserver health ...
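The healthz wait that just completed is a plain HTTPS GET against the apiserver, as the 200/`ok` above shows. The same probe by hand (with -k because the apiserver cert is signed by the cluster's own CA):

	curl -k https://192.168.49.2:8443/healthz
	# expected body on success: ok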
	I0929 10:36:57.238122  809094 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:36:57.295055  809094 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:36:57.295086  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:57.297613  809094 system_pods.go:59] 19 kube-system pods found
	I0929 10:36:57.297672  809094 system_pods.go:61] "amd-gpu-device-plugin-5pr2g" [d246c61e-5b06-4a8a-b834-a8104ed29092] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:36:57.297684  809094 system_pods.go:61] "coredns-66bc5c9577-5qvcx" [431a8d9d-251c-4916-9c89-87d909588d39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:36:57.297700  809094 system_pods.go:61] "csi-hostpath-attacher-0" [70a20e1e-6207-4b51-afa5-e013bc35656f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:36:57.297708  809094 system_pods.go:61] "csi-hostpath-resizer-0" [9de479e4-bad8-4f15-bfd7-f7f8a2dd12e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:36:57.297716  809094 system_pods.go:61] "csi-hostpathplugin-z7562" [bb4e1eda-850e-4da2-ae44-f84511d3ee67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:36:57.297725  809094 system_pods.go:61] "etcd-addons-529420" [1b81bfdd-919b-4cd8-a60e-22f79112660f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:36:57.297730  809094 system_pods.go:61] "kube-apiserver-addons-529420" [47505dbb-1863-4367-98f4-9f7ebb2c9415] Running
	I0929 10:36:57.297746  809094 system_pods.go:61] "kube-controller-manager-addons-529420" [37496a5f-d392-4a23-b1a5-1c7a43d653a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:36:57.297755  809094 system_pods.go:61] "kube-ingress-dns-minikube" [a3dd66da-3ebf-43f5-80b9-e99fc50dfcb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:36:57.297760  809094 system_pods.go:61] "kube-proxy-kxjqx" [99197e14-7d79-403b-87f4-b1f17f52f9f5] Running
	I0929 10:36:57.297767  809094 system_pods.go:61] "kube-scheduler-addons-529420" [ebe74bc8-ca56-4474-99cf-5b6633e8a102] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:36:57.297774  809094 system_pods.go:61] "metrics-server-85b7d694d7-6kvbb" [2261b64a-3e18-4fca-8fb4-4a870b5d302f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:36:57.297781  809094 system_pods.go:61] "nvidia-device-plugin-daemonset-rv5nz" [70340435-f458-4a2a-a617-185f04893887] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:36:57.297791  809094 system_pods.go:61] "registry-66898fdd98-9545s" [6b827e39-0a4c-4539-b408-9bfd1cdda6c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:36:57.297799  809094 system_pods.go:61] "registry-creds-764b6fb674-ssj2n" [a5aecf48-b091-49c9-8743-a9d358c2fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:36:57.297807  809094 system_pods.go:61] "registry-proxy-b2bpz" [87ba9cf0-8f09-4423-b674-f77637763bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:36:57.297814  809094 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bxfh2" [bb19aa00-633c-4f38-88f2-3cd29546aec9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.297826  809094 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pwx9p" [5203fa8d-6a4b-4bc9-89be-9a58d780e11e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.297834  809094 system_pods.go:61] "storage-provisioner" [c791ec78-6e64-4b97-a928-e3f1e19e92c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:36:57.297843  809094 system_pods.go:74] duration metric: took 59.712387ms to wait for pod list to return data ...
	I0929 10:36:57.297854  809094 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:36:57.298160  809094 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:36:57.298172  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:57.305651  809094 default_sa.go:45] found service account: "default"
	I0929 10:36:57.305752  809094 default_sa.go:55] duration metric: took 7.889394ms for default service account to be created ...
	I0929 10:36:57.305777  809094 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:36:57.314079  809094 system_pods.go:86] 19 kube-system pods found
	I0929 10:36:57.314192  809094 system_pods.go:89] "amd-gpu-device-plugin-5pr2g" [d246c61e-5b06-4a8a-b834-a8104ed29092] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:36:57.314243  809094 system_pods.go:89] "coredns-66bc5c9577-5qvcx" [431a8d9d-251c-4916-9c89-87d909588d39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:36:57.314277  809094 system_pods.go:89] "csi-hostpath-attacher-0" [70a20e1e-6207-4b51-afa5-e013bc35656f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:36:57.314298  809094 system_pods.go:89] "csi-hostpath-resizer-0" [9de479e4-bad8-4f15-bfd7-f7f8a2dd12e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:36:57.314318  809094 system_pods.go:89] "csi-hostpathplugin-z7562" [bb4e1eda-850e-4da2-ae44-f84511d3ee67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:36:57.314350  809094 system_pods.go:89] "etcd-addons-529420" [1b81bfdd-919b-4cd8-a60e-22f79112660f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 10:36:57.314386  809094 system_pods.go:89] "kube-apiserver-addons-529420" [47505dbb-1863-4367-98f4-9f7ebb2c9415] Running
	I0929 10:36:57.314408  809094 system_pods.go:89] "kube-controller-manager-addons-529420" [37496a5f-d392-4a23-b1a5-1c7a43d653a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 10:36:57.314453  809094 system_pods.go:89] "kube-ingress-dns-minikube" [a3dd66da-3ebf-43f5-80b9-e99fc50dfcb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:36:57.314472  809094 system_pods.go:89] "kube-proxy-kxjqx" [99197e14-7d79-403b-87f4-b1f17f52f9f5] Running
	I0929 10:36:57.314489  809094 system_pods.go:89] "kube-scheduler-addons-529420" [ebe74bc8-ca56-4474-99cf-5b6633e8a102] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 10:36:57.314521  809094 system_pods.go:89] "metrics-server-85b7d694d7-6kvbb" [2261b64a-3e18-4fca-8fb4-4a870b5d302f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:36:57.314541  809094 system_pods.go:89] "nvidia-device-plugin-daemonset-rv5nz" [70340435-f458-4a2a-a617-185f04893887] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:36:57.314558  809094 system_pods.go:89] "registry-66898fdd98-9545s" [6b827e39-0a4c-4539-b408-9bfd1cdda6c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:36:57.314575  809094 system_pods.go:89] "registry-creds-764b6fb674-ssj2n" [a5aecf48-b091-49c9-8743-a9d358c2fcc4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:36:57.314623  809094 system_pods.go:89] "registry-proxy-b2bpz" [87ba9cf0-8f09-4423-b674-f77637763bcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:36:57.314641  809094 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxfh2" [bb19aa00-633c-4f38-88f2-3cd29546aec9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.314661  809094 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pwx9p" [5203fa8d-6a4b-4bc9-89be-9a58d780e11e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:36:57.314703  809094 system_pods.go:89] "storage-provisioner" [c791ec78-6e64-4b97-a928-e3f1e19e92c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:36:57.314741  809094 system_pods.go:126] duration metric: took 8.926144ms to wait for k8s-apps to be running ...
	I0929 10:36:57.314762  809094 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:36:57.314849  809094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:36:57.336098  809094 system_svc.go:56] duration metric: took 21.32375ms WaitForService to wait for kubelet
	I0929 10:36:57.336133  809094 kubeadm.go:578] duration metric: took 3.632804576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
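The WaitForService step above decides kubelet is up purely from the exit status of `systemctl is-active --quiet`, which prints nothing and exits 0 only while the unit is active. A minimal Go sketch of that check (a hypothetical local helper, not minikube's actual ssh_runner, which runs the command over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // serviceActive reports whether a systemd unit is active, using the
    // same signal as the log above: `systemctl is-active --quiet` exits
    // with status 0 only when the unit is in the "active" state.
    func serviceActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", serviceActive("kubelet"))
    }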
	I0929 10:36:57.336158  809094 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:36:57.344338  809094 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 10:36:57.344379  809094 node_conditions.go:123] node cpu capacity is 8
	I0929 10:36:57.344396  809094 node_conditions.go:105] duration metric: took 8.231792ms to run NodePressure ...
	I0929 10:36:57.344410  809094 start.go:241] waiting for startup goroutines ...
	I0929 10:36:57.376412  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:36:57.376539  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:36:57.692404  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:57.793184  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:57.793318  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.191147  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:58.233460  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.233602  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:58.692356  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:58.733694  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:58.733886  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:59.194666  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:59.293196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:59.293196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:36:59.692198  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:36:59.792340  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:36:59.792540  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:00.126856  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.750359151s)
	I0929 10:37:00.126994  809094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.750412046s)
	W0929 10:37:00.127048  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:00.127077  809094 retry.go:31] will retry after 243.584374ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
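The failure being retried here is deterministic: kubectl's client-side validation rejects ig-crd.yaml because a document in that file sets neither apiVersion nor kind, both of which every Kubernetes manifest must declare, so repeating the same apply cannot succeed until the file changes (or validation is disabled with --validate=false, as the stderr itself suggests). The retry.go lines that follow show the harness retrying anyway with growing pauses (243ms, 313ms, 1.15s, 2.1s, ...). A minimal sketch of that retry-with-backoff pattern, as hypothetical Go rather than minikube's actual retry.go:

    package main

    import (
        "fmt"
        "time"
    )

    // retryWithBackoff runs fn up to maxAttempts times, roughly doubling
    // the pause between failures, and returns the last error if every
    // attempt fails. The uneven delays in the log suggest the real policy
    // also applies jitter.
    func retryWithBackoff(maxAttempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            if err = fn(); err == nil {
                return nil
            }
            if attempt < maxAttempts {
                fmt.Printf("will retry after %v: %v\n", delay, err)
                time.Sleep(delay)
                delay *= 2
            }
        }
        return err
    }

    func main() {
        _ = retryWithBackoff(5, 250*time.Millisecond, func() error {
            return fmt.Errorf("apply failed") // stand-in for the kubectl apply above
        })
    }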
	I0929 10:37:00.191955  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:00.292723  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:00.292816  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:00.371587  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:00.692541  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:00.733843  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:00.734315  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:01.147379  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:01.147434  809094 retry.go:31] will retry after 313.436894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:01.192318  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:01.194800  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:37:01.194881  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:37:01.217609  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
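The cli_runner/sshutil pair above shows how the SSH endpoint is discovered: docker container inspect evaluates a Go template against the container's NetworkSettings and returns the host port that Docker mapped to the container's 22/tcp (33519 here), which is then dialed on 127.0.0.1. A minimal sketch of the same lookup (hypothetical wrapper; the extra single quotes in the logged command are shell quoting and are dropped here):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template as the logged command: index the port bindings
        // for 22/tcp and take the first mapping's HostPort.
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "addons-529420").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out)))
    }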
	I0929 10:37:01.269785  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:01.269897  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:01.337246  809094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 10:37:01.363074  809094 addons.go:238] Setting addon gcp-auth=true in "addons-529420"
	I0929 10:37:01.363143  809094 host.go:66] Checking if "addons-529420" exists ...
	I0929 10:37:01.363713  809094 cli_runner.go:164] Run: docker container inspect addons-529420 --format={{.State.Status}}
	I0929 10:37:01.386898  809094 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:37:01.386956  809094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-529420
	I0929 10:37:01.409699  809094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/addons-529420/id_rsa Username:docker}
	I0929 10:37:01.461472  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:01.691916  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:01.733454  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:01.734086  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.191808  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:37:02.208498  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:02.208536  809094 retry.go:31] will retry after 1.150273014s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:02.211872  809094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:37:02.213574  809094 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:37:02.214876  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:37:02.214902  809094 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:37:02.234020  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:02.234162  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.241298  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:37:02.241331  809094 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:37:02.266924  809094 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:37:02.266946  809094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 10:37:02.293874  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:37:02.684619  809094 addons.go:479] Verifying addon gcp-auth=true in "addons-529420"
	I0929 10:37:02.685986  809094 out.go:179] * Verifying gcp-auth addon...
	I0929 10:37:02.688185  809094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:37:02.693394  809094 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:37:02.693428  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:02.693405  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:02.733763  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:02.733885  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.192364  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:03.192855  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:03.234316  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.234465  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:03.359547  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:03.692021  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:03.692117  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:03.733209  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:03.734038  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:04.191279  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:04.191358  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:04.233692  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:04.234492  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:04.287957  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:04.287997  809094 retry.go:31] will retry after 1.511178059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:04.692167  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:04.692234  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:04.733131  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:04.734062  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.191715  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:05.191921  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:05.233860  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:05.234129  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.691361  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:05.691521  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:05.734135  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:05.734258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:05.800129  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:06.191447  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:06.191552  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:06.233874  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:06.234076  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:06.411195  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:06.411250  809094 retry.go:31] will retry after 2.100064857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:06.691798  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:06.692073  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:06.732827  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:06.733809  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.191508  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:07.191709  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:07.292363  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:07.292457  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.691546  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:07.691574  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:07.734030  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:07.734125  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.191781  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:08.191814  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:08.232967  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.233754  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:08.512112  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:08.691009  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:08.691510  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:08.733861  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:08.733938  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:09.104861  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:09.104897  809094 retry.go:31] will retry after 3.376157006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:09.191367  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:09.191595  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:09.233494  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:09.233556  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:09.691727  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:09.691798  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:09.733640  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:09.733721  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:10.190922  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:10.191004  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:10.233082  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:10.234008  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:10.691979  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:10.692058  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:10.733220  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:10.733906  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:11.191409  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:11.191511  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:11.234068  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:11.234155  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:11.691573  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:11.691635  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:11.732866  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:11.733594  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:12.191979  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:12.192191  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:12.232888  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:12.233877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:12.481250  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:12.691033  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:12.691374  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:12.733300  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:12.733527  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:13.054889  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:13.054935  809094 retry.go:31] will retry after 5.523481945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:13.191285  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:13.191341  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:13.233717  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:13.233802  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:13.691877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:13.691923  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:13.732931  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:13.733745  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:14.191616  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:14.191752  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:14.233768  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:14.233879  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:14.691856  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:14.691909  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:14.733917  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:14.733984  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:15.191840  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:15.191956  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:15.232857  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:15.233963  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:15.691580  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:15.691731  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:15.733913  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:15.735663  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:16.191071  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:16.191303  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:16.233820  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:16.233918  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:16.691314  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:16.691386  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:16.734056  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:16.734170  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:17.191402  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:17.191593  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:17.233380  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:17.233625  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:17.691493  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:17.691699  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:17.732648  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:37:17.733372  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:18.191911  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:18.192046  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:18.233634  809094 kapi.go:107] duration metric: took 21.004035787s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 10:37:18.234214  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
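The wait that just completed (21s for kubernetes.io/minikube-addons=registry) is a poll: kapi.go lists the pods matching a label selector, logs the aggregate state, and loops until every pod is up or the surrounding deadline expires. A minimal client-go sketch of such a wait (hypothetical; it assumes an already-built clientset and checks only the Running phase, while the real loop also tracks the Ready condition):

    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForLabel polls pods matching selector in ns until all of them
    // reach the Running phase, or until ctx expires.
    func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            running := 0
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    running++
                }
            }
            if len(pods.Items) > 0 && running == len(pods.Items) {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }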
	I0929 10:37:18.578952  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:18.692732  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:18.693005  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:18.734571  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:19.192457  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:19.192698  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:19.236192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:19.359657  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:19.359698  809094 retry.go:31] will retry after 5.374303415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:37:19.691689  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:19.691870  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:19.734388  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:20.192362  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:20.192436  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:20.235035  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:20.691493  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:20.691559  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:20.734068  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:21.191686  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:21.191754  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:21.234043  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:21.691673  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:21.691816  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:21.734360  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:22.192209  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:22.192364  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:22.234765  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:22.691170  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:22.691296  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:22.734585  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:23.191838  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:23.191855  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:23.234192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:23.691617  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:23.691827  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:23.734342  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:24.191851  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:24.191914  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:24.234052  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:24.691665  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:24.691766  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:24.734744  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:24.778008  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:25.191224  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:25.191245  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:25.233932  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:25.323222  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:25.323263  809094 retry.go:31] will retry after 4.946418269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
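
The failed apply above is re-run four more times below with exponentially growing, jittered delays (4.946s, 9.611s, 27.798s, 26.574s), the pattern behind the retry.go:31 "will retry after" lines. A minimal standalone sketch of that shape in Go; the helper name, attempt cap, and delay tuning are illustrative assumptions, not minikube's actual retry package. Note that retrying cannot fix this particular failure, because the validation error is deterministic (see the manifest check sketched further down).

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry shells out to kubectl and retries failures with an
// exponentially growing, jittered delay, matching the "will retry after"
// lines in this log. The attempt cap and base delay are illustrative.
func applyWithRetry(kubeconfig string, manifests ...string) error {
	args := []string{"apply", "--force"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	delay := 5 * time.Second
	var lastErr error
	for attempt := 1; attempt <= 5; attempt++ {
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %w\n%s", attempt, err, out)
		// Jitter keeps concurrent retries from synchronizing; doubling the
		// base gives the roughly 5s -> 10s -> 27s spacing seen above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/ig-crd.yaml",
		"/etc/kubernetes/addons/ig-deployment.yaml")
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
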
	I0929 10:37:25.692211  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:25.692261  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:25.793190  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:26.191796  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:26.192044  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:26.234744  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:26.691669  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:26.691730  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:26.792265  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:27.191948  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:27.192054  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:27.293582  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:27.691274  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:27.691324  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:27.734583  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:28.191896  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:28.192035  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:28.234579  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:28.691393  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:28.691468  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:28.745258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:29.191857  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:29.192145  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:29.293661  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:29.692369  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:29.692409  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:29.734847  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:30.191284  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:30.191549  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:30.235247  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:30.270015  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:30.692450  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:30.692468  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:30.734382  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:30.874687  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:30.874717  809094 retry.go:31] will retry after 9.611404012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:31.191034  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:31.191136  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:31.234596  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:31.691789  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:31.691851  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:31.734487  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:32.192228  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:32.192397  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:32.234895  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:32.691647  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:32.691826  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:32.734231  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:33.191791  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:33.191803  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:33.293681  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:33.691499  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:33.691575  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:33.733652  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:34.191347  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:34.191394  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:34.235018  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:34.692239  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:34.692288  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:34.734356  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:35.191945  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:35.192042  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:35.233802  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:35.691543  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:35.691708  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:35.733748  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:36.190728  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:36.190927  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:36.234247  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:36.691843  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:36.691958  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:36.734324  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:37.232504  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:37.232580  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:37.234832  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:37.691206  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:37.691326  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:37.734619  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:38.191595  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:38.191680  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:38.292586  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:38.692258  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:38.692407  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:38.734713  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:39.191465  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:39.191541  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:39.233459  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:39.692264  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:39.692361  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:39.736727  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:40.191041  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:40.191241  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:40.282297  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:40.487332  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:37:40.700928  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:40.700982  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:40.734342  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:37:41.111058  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:41.111096  809094 retry.go:31] will retry after 27.798318096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:37:41.192192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:41.192448  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:41.234734  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:41.691467  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:41.691559  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:41.738052  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:42.191281  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:42.191334  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:42.234363  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:42.692206  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:42.692364  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:42.735143  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:43.192059  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:43.192131  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:43.234581  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:43.691539  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:43.691592  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:43.734168  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:44.192399  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:44.192554  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:44.235151  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:44.712201  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:44.712256  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:44.814275  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:45.191818  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:45.193367  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:45.292640  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:45.692318  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:45.692523  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:45.734670  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:46.191229  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:46.191310  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:46.234266  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:37:46.691380  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:46.691528  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:46.735585  809094 kapi.go:107] duration metric: took 49.505194902s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
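
Each kapi.go:96 line above is one tick of a label-selector poll: roughly every 500ms minikube lists the pods matching an addon's selector and logs their state until all report Ready (49.5s for csi-hostpath-driver here; just over a minute for ingress-nginx and gcp-auth below). A minimal client-go sketch of the same wait; the function names are ours, not minikube's.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPods polls until every pod matching selector in ns reports
// Ready, or the timeout elapses, the same shape as the kapi.go:96 loop.
func waitForLabeledPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient API error or nothing scheduled yet: keep polling
			}
			for _, p := range pods.Items {
				if !podReady(&p) {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	sel := "kubernetes.io/minikube-addons=csi-hostpath-driver"
	if err := waitForLabeledPods(cs, "kube-system", sel, 6*time.Minute); err != nil {
		fmt.Println("wait failed:", err)
	}
}
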
	I0929 10:37:47.191025  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:47.191255  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:47.691635  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:47.691700  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:48.191562  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:48.191703  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:48.692227  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:48.692295  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.191449  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:49.191458  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.691391  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:49.691392  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.191310  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.191376  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:50.691312  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:50.691530  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:51.190897  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:51.191071  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:51.690962  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:51.690975  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:52.191596  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:52.191799  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:52.691821  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:52.691860  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.192013  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:53.192203  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.690815  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:53.691412  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:54.191531  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:54.191563  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:54.690735  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:54.691192  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.191955  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.192190  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:55.691501  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:55.691567  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:56.191135  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:56.191259  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:56.691523  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:56.691569  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.190892  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.191196  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:57.691389  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:57.691460  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.191486  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.191637  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:58.691468  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:58.691615  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:59.191501  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:59.191524  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:37:59.691453  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:37:59.691624  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:00.191269  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:00.191495  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:00.691597  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:00.691645  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:01.192840  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:01.193251  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:01.692293  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:01.692359  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:02.191486  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:02.191506  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:02.691685  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:02.691815  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:03.191877  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:03.191952  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:03.692663  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:03.692768  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:04.191650  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:04.191736  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:04.691141  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:04.691354  809094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:38:05.191572  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:05.191608  809094 kapi.go:107] duration metric: took 1m9.004009852s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:38:05.692502  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:06.192606  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:06.692131  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:07.191846  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:07.691923  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.192050  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.691556  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:08.910309  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:38:09.192274  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:38:09.505831  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:38:09.505869  809094 retry.go:31] will retry after 26.574045109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:38:09.692484  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:10.191984  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:10.692204  809094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:38:11.191688  809094 kapi.go:107] duration metric: took 1m8.503599106s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:38:11.193489  809094 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-529420 cluster.
	I0929 10:38:11.194659  809094 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:38:11.195892  809094 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
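
The three notes above describe the gcp-auth admission webhook: credentials are injected into every new pod unless the pod carries the gcp-auth-skip-secret label key. A hedged client-go sketch of opting a pod out; the pod name, image, and label value are placeholders, and per the wording above only the key should matter.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds",
			// The gcp-auth webhook skips pods carrying this label key; the
			// value is an assumption, the log above only names the key.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created pod without GCP credential mount")
}
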
	I0929 10:38:36.081555  809094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 10:38:36.665106  809094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 10:38:36.665238  809094 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
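
The addon is ultimately reported as failed because the error never changes: kubectl's client-side validation rejects ig-crd.yaml on every attempt with "apiVersion not set, kind not set", which typically means one document in the multi-document file is empty (a stray ---) or lost its TypeMeta. A small diagnostic sketch, not part of minikube and assuming gopkg.in/yaml.v3, that locates such a document:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// lintManifest scans a multi-document YAML file and reports any document
// missing apiVersion or kind, the exact condition kubectl rejects above.
func lintManifest(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]any
		if err := dec.Decode(&doc); err == io.EOF {
			return nil
		} else if err != nil {
			return fmt.Errorf("document %d: %w", i, err)
		}
		if doc == nil {
			fmt.Printf("document %d is empty (stray '---'?)\n", i)
			continue
		}
		for _, key := range []string{"apiVersion", "kind"} {
			if _, ok := doc[key]; !ok {
				fmt.Printf("document %d: %s not set\n", i, key)
			}
		}
	}
}

func main() {
	if err := lintManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		fmt.Println(err)
	}
}
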
	I0929 10:38:36.667274  809094 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, registry-creds, metrics-server, ingress-dns, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0929 10:38:36.668228  809094 addons.go:514] duration metric: took 1m42.964873486s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass registry-creds metrics-server ingress-dns volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0929 10:38:36.668278  809094 start.go:246] waiting for cluster config update ...
	I0929 10:38:36.668307  809094 start.go:255] writing updated cluster config ...
	I0929 10:38:36.668641  809094 ssh_runner.go:195] Run: rm -f paused
	I0929 10:38:36.672803  809094 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:38:36.676997  809094 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5qvcx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.681719  809094 pod_ready.go:94] pod "coredns-66bc5c9577-5qvcx" is "Ready"
	I0929 10:38:36.681743  809094 pod_ready.go:86] duration metric: took 4.720124ms for pod "coredns-66bc5c9577-5qvcx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.684044  809094 pod_ready.go:83] waiting for pod "etcd-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.688395  809094 pod_ready.go:94] pod "etcd-addons-529420" is "Ready"
	I0929 10:38:36.688443  809094 pod_ready.go:86] duration metric: took 4.375753ms for pod "etcd-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.690812  809094 pod_ready.go:83] waiting for pod "kube-apiserver-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.694897  809094 pod_ready.go:94] pod "kube-apiserver-addons-529420" is "Ready"
	I0929 10:38:36.694938  809094 pod_ready.go:86] duration metric: took 4.098109ms for pod "kube-apiserver-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:36.696911  809094 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.077390  809094 pod_ready.go:94] pod "kube-controller-manager-addons-529420" is "Ready"
	I0929 10:38:37.077434  809094 pod_ready.go:86] duration metric: took 380.496118ms for pod "kube-controller-manager-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.276699  809094 pod_ready.go:83] waiting for pod "kube-proxy-kxjqx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.677151  809094 pod_ready.go:94] pod "kube-proxy-kxjqx" is "Ready"
	I0929 10:38:37.677180  809094 pod_ready.go:86] duration metric: took 400.453557ms for pod "kube-proxy-kxjqx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:37.877980  809094 pod_ready.go:83] waiting for pod "kube-scheduler-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:38.277511  809094 pod_ready.go:94] pod "kube-scheduler-addons-529420" is "Ready"
	I0929 10:38:38.277542  809094 pod_ready.go:86] duration metric: took 399.519909ms for pod "kube-scheduler-addons-529420" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:38:38.277554  809094 pod_ready.go:40] duration metric: took 1.604719848s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:38:38.326458  809094 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 10:38:38.327948  809094 out.go:179] * Done! kubectl is now configured to use "addons-529420" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 29 10:45:25 addons-529420 dockerd[1117]: time="2025-09-29T10:45:25.538702810Z" level=info msg="ignoring event" container=e1ac1f8211a256be3f7039c85cf689c705fb84324c983d193bfc2150690f8fff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:45:30 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:45:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/352780d153f6a6d715f4edc940400ab9983ae187911b1af4b5f641262963ed4d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 29 10:45:30 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:45:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bb3744a5c9b15698eedec6834da8902ece3e2b85e6b68c9a6ae4ec8c1365229e/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 29 10:45:31 addons-529420 dockerd[1117]: time="2025-09-29T10:45:31.670453885Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:45:31 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:45:31Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Sep 29 10:45:31 addons-529420 dockerd[1117]: time="2025-09-29T10:45:31.775625035Z" level=info msg="ignoring event" container=acea706acb68dc3a1ba5149b2e8c48925042a208fcd33efb921d974c5c02c09d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:45:31 addons-529420 dockerd[1117]: time="2025-09-29T10:45:31.914195053Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:45:31 addons-529420 dockerd[1117]: time="2025-09-29T10:45:31.920250744Z" level=info msg="ignoring event" container=7029b34fcc6001658f17a3d72240c9f3b800f570c8dc362684cd016df6a575d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:45:32 addons-529420 dockerd[1117]: time="2025-09-29T10:45:32.386231529Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:45:37 addons-529420 dockerd[1117]: time="2025-09-29T10:45:37.225759698Z" level=info msg="ignoring event" container=3840db368fa915d1003d0f8166ff88362e609fde222cdffacb285e9a65ca7729 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:45:37 addons-529420 dockerd[1117]: time="2025-09-29T10:45:37.373360910Z" level=info msg="ignoring event" container=55c14d8df41149ec561ead5fe439f51ae38bcda72d63a51b977bbf2c8833856d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 10:45:38 addons-529420 cri-dockerd[1422]: time="2025-09-29T10:45:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/21b5456ef15610da3a6a307bd6a935e799e758d39fa6f21f52c502945d7e63df/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 29 10:45:39 addons-529420 dockerd[1117]: time="2025-09-29T10:45:39.113535052Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:45:47 addons-529420 dockerd[1117]: time="2025-09-29T10:45:47.710176070Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:45:47 addons-529420 dockerd[1117]: time="2025-09-29T10:45:47.953012291Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:45:48 addons-529420 dockerd[1117]: time="2025-09-29T10:45:48.421008315Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:45:54 addons-529420 dockerd[1117]: time="2025-09-29T10:45:54.720481101Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:46:14 addons-529420 dockerd[1117]: time="2025-09-29T10:46:14.765888142Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:46:15 addons-529420 dockerd[1117]: time="2025-09-29T10:46:15.949066104Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:46:16 addons-529420 dockerd[1117]: time="2025-09-29T10:46:16.420811680Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:46:18 addons-529420 dockerd[1117]: time="2025-09-29T10:46:18.699889263Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:46:59 addons-529420 dockerd[1117]: time="2025-09-29T10:46:59.700640420Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:47:04 addons-529420 dockerd[1117]: time="2025-09-29T10:47:04.693768710Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:47:05 addons-529420 dockerd[1117]: time="2025-09-29T10:47:05.952262466Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:47:06 addons-529420 dockerd[1117]: time="2025-09-29T10:47:06.425910889Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
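
The toomanyrequests lines are Docker Hub's per-IP quota for anonymous pulls: once it is exhausted, every pull from docker.io fails until the window resets, which is why the nginx and busybox pulls above keep dying. Authenticated pulls get a much higher quota. A sketch of an authenticated pull using go-containerregistry's crane; the environment variable names are our assumption.

package main

import (
	"fmt"
	"os"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// Credentials move the pull onto the authenticated quota instead of
	// the shared anonymous per-IP limit.
	auth := &authn.Basic{
		Username: os.Getenv("DOCKERHUB_USER"),
		Password: os.Getenv("DOCKERHUB_TOKEN"),
	}
	img, err := crane.Pull("docker.io/library/nginx:latest", crane.WithAuth(auth))
	if err != nil {
		fmt.Println("pull failed:", err)
		return
	}
	digest, _ := img.Digest()
	fmt.Println("pulled nginx@" + digest.String())
}

Inside the cluster, the equivalent fix is an imagePullSecret referenced from the pod spec rather than pulling out-of-band.
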
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c663ddd040c70       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          2 minutes ago       Running             busybox                                  0                   75cd2cc2ccb54       busybox
	53f450233c951       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             9 minutes ago       Running             controller                               0                   7e7e5a36e33c6       ingress-nginx-controller-9cc49f96f-pxng2
	b8bc330ded02a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	097ec25364497       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          9 minutes ago       Running             csi-provisioner                          0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	42c80806f3442       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            9 minutes ago       Running             liveness-probe                           0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	deb6678813fad       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           9 minutes ago       Running             hostpath                                 0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	4c9a460366a7b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                9 minutes ago       Running             node-driver-registrar                    0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	8ea1f8a967cfc       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   9 minutes ago       Running             csi-external-health-monitor-controller   0                   ebcc1d0ccdc31       csi-hostpathplugin-z7562
	8c797f52e7973       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              9 minutes ago       Running             csi-resizer                              0                   f5028bed08a36       csi-hostpath-resizer-0
	7ed61e7756a57       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             9 minutes ago       Running             csi-attacher                             0                   80edbd2ea7f0f       csi-hostpath-attacher-0
	ac3df2986f815       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      9 minutes ago       Running             volume-snapshot-controller               0                   2f35c5222a676       snapshot-controller-7d9fbc56b8-pwx9p
	cb2fd8f092644       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      9 minutes ago       Running             volume-snapshot-controller               0                   ba080eb06cf32       snapshot-controller-7d9fbc56b8-bxfh2
	2575c5650e053       8c217da6734db                                                                                                                                9 minutes ago       Exited              patch                                    1                   8b7ddb7c14d4d       ingress-nginx-admission-patch-ncp6w
	788b28f41ae4a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   9 minutes ago       Exited              create                                   0                   fe7563f8d917a       ingress-nginx-admission-create-tj9td
	866b95aa367a6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            9 minutes ago       Running             gadget                                   0                   7ee25e2189efc       gadget-8bvtc
	7292e19e493f5       6e38f40d628db                                                                                                                                9 minutes ago       Running             storage-provisioner                      1                   1360822b0ef5c       storage-provisioner
	8cc199db322c9       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       9 minutes ago       Running             local-path-provisioner                   0                   aae77681f23ba       local-path-provisioner-648f6765c9-fg6hp
	88b9438083846       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         9 minutes ago       Running             minikube-ingress-dns                     0                   33ea5a9962762       kube-ingress-dns-minikube
	910e07bc5455e       6e38f40d628db                                                                                                                                10 minutes ago      Exited              storage-provisioner                      0                   1360822b0ef5c       storage-provisioner
	c491d217e6a98       52546a367cc9e                                                                                                                                10 minutes ago      Running             coredns                                  0                   5f95340f57eb0       coredns-66bc5c9577-5qvcx
	7f229dd48a2c3       df0860106674d                                                                                                                                10 minutes ago      Running             kube-proxy                               0                   6014edb3b0f91       kube-proxy-kxjqx
	cc27cd95c2f3e       90550c43ad2bc                                                                                                                                10 minutes ago      Running             kube-apiserver                           0                   2e57097224cc0       kube-apiserver-addons-529420
	ea3b2c765f2ae       a0af72f2ec6d6                                                                                                                                10 minutes ago      Running             kube-controller-manager                  0                   ce79a900c656a       kube-controller-manager-addons-529420
	7f38046322a2d       46169d968e920                                                                                                                                10 minutes ago      Running             kube-scheduler                           0                   74b0641932447       kube-scheduler-addons-529420
	3a3152518f29b       5f1f5298c888d                                                                                                                                10 minutes ago      Running             etcd                                     0                   099566320ee0a       etcd-addons-529420
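
A note on reading this listing: the two ingress-nginx admission jobs ("create" and "patch") show Exited because they are run-to-completion hooks, and the Restart Count of 1 on storage-provisioner matches the earlier Exited copy sharing the same pod sandbox (1360822b0ef5c). The volcano-scheduler container from the failed test never appears here because its image pull never succeeded. To list exited containers on the node directly, something along these lines should work (a sketch; profile name taken from this report):

  out/minikube-linux-amd64 -p addons-529420 ssh -- docker ps -a --filter status=exited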
	
	
	==> controller_ingress [53f450233c95] <==
	I0929 10:38:04.686863       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"47e79a0f-52ba-4094-b6bf-cff5bc2f5782", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0929 10:38:04.686885       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"7d8260c8-2dc2-42de-8475-3174f0fbcebe", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0929 10:38:05.882258       7 nginx.go:319] "Starting NGINX process"
	I0929 10:38:05.882452       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0929 10:38:05.882700       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0929 10:38:05.883222       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 10:38:05.888729       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0929 10:38:05.888761       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9cc49f96f-pxng2"
	I0929 10:38:05.892640       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-pxng2" node="addons-529420"
	I0929 10:38:05.926797       7 controller.go:228] "Backend successfully reloaded"
	I0929 10:38:05.926911       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I0929 10:38:05.926976       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-pxng2", UID:"587b0efc-8a0d-48e9-8f76-f6c72ed2a5f2", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0929 10:38:05.993545       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-pxng2" node="addons-529420"
	W0929 10:45:37.492997       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0929 10:45:37.494693       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0929 10:45:37.498137       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0929 10:45:37.498313       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"1675f37f-1842-4388-a104-89352b87f57a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2589", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 10:45:40.520096       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 10:45:40.520746       7 controller.go:214] "Configuration changes detected, backend reload required"
	I0929 10:45:40.560990       7 controller.go:228] "Backend successfully reloaded"
	I0929 10:45:40.561244       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-9cc49f96f-pxng2", UID:"587b0efc-8a0d-48e9-8f76-f6c72ed2a5f2", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0929 10:45:43.853384       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I0929 10:46:05.896671       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I0929 10:46:05.900481       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"1675f37f-1842-4388-a104-89352b87f57a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2672", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0929 10:46:05.900661       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
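
The "does not have any active Endpoint" warnings are consistent with the kubelet log further down: the default/nginx pod never becomes Ready because its image pull is rate-limited, so the Service it backs has no ready addresses. A quick way to confirm (commands assumed from the context of this report):

  kubectl --context addons-529420 -n default get endpoints nginx
  kubectl --context addons-529420 -n default get pod nginx -o wide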
	
	
	==> coredns [c491d217e6a9] <==
	[INFO] 10.244.0.7:50415 - 40142 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000140179s
	[INFO] 10.244.0.7:51631 - 16547 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000077812s
	[INFO] 10.244.0.7:51631 - 16342 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000121622s
	[INFO] 10.244.0.7:54120 - 55745 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000074894s
	[INFO] 10.244.0.7:54120 - 55427 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00012144s
	[INFO] 10.244.0.7:44276 - 57260 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156752s
	[INFO] 10.244.0.7:44276 - 57057 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019348s
	[INFO] 10.244.0.25:60127 - 41853 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000357536s
	[INFO] 10.244.0.25:49489 - 60455 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000420887s
	[INFO] 10.244.0.25:56444 - 64863 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123075s
	[INFO] 10.244.0.25:47991 - 45219 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000219459s
	[INFO] 10.244.0.25:41073 - 37798 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132747s
	[INFO] 10.244.0.25:56023 - 28022 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00018111s
	[INFO] 10.244.0.25:34207 - 60772 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003985247s
	[INFO] 10.244.0.25:34528 - 32480 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00679976s
	[INFO] 10.244.0.25:57103 - 8380 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006696106s
	[INFO] 10.244.0.25:58770 - 56546 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.011895734s
	[INFO] 10.244.0.25:46155 - 33827 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004624849s
	[INFO] 10.244.0.25:33222 - 31647 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.008860505s
	[INFO] 10.244.0.25:50865 - 49473 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.0056946s
	[INFO] 10.244.0.25:47833 - 61181 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006586754s
	[INFO] 10.244.0.25:57597 - 20971 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000971648s
	[INFO] 10.244.0.25:51886 - 57600 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00104711s
	[INFO] 10.244.0.28:43176 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000337648s
	[INFO] 10.244.0.28:43537 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000209032s
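
The long NXDOMAIN chains above are normal ndots:5 behavior: for an external name such as storage.googleapis.com, the resolver tries every search domain before the bare name finally returns NOERROR. Judging from the suffixes in these queries, the querying pod's /etc/resolv.conf likely looks roughly like this (a reconstruction from the query names, not captured output):

  nameserver 10.96.0.10
  search gcp-auth.svc.cluster.local svc.cluster.local cluster.local local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
  options ndots:5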
	
	
	==> describe nodes <==
	Name:               addons-529420
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-529420
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=addons-529420
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_36_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-529420
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-529420"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:36:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-529420
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:47:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:45:49 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:45:49 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:45:49 +0000   Mon, 29 Sep 2025 10:36:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:45:49 +0000   Mon, 29 Sep 2025 10:36:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-529420
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a82221623174cb89aca2e3330c89f2e
	  System UUID:                a576baa7-93e3-43ba-ad25-9862293a3344
	  Boot ID:                    2fed06dd-7ca1-49fc-8cb4-296c71642eea
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  gadget                      gadget-8bvtc                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-pxng2                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-5qvcx                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-z7562                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-529420                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-529420                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-529420                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-kxjqx                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-529420                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 registry-creds-764b6fb674-ssj2n                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-bxfh2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-pwx9p                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  local-path-storage          local-path-provisioner-648f6765c9-fg6hp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-5thg8                                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             388Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-529420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-529420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-529420 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-529420 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-529420 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-529420 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node addons-529420 event: Registered Node addons-529420 in Controller
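
The percentages in the Allocated resources table are simply request/capacity, rounded down: 850m of CPU against 8 cores is 850/8000 ≈ 10.6%, reported as 10%, and 388Mi of memory against 32863456Ki (~31.3Gi) is about 1.2%, reported as 1%. Nothing here suggests resource pressure; the failing pods are image-pull problems, not scheduling ones.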
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 60 55 0a b3 db 08 06
	[  +0.024494] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 ec 8f 40 19 66 08 06
	[  +2.802739] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 72 ba 15 15 c7 ec 08 06
	[  +0.014450] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9a 38 c6 be 73 e8 08 06
	[  +2.936645] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 df 88 08 78 89 08 06
	[  +0.326891] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 b3 ea 3d 15 d3 08 06
	[  +0.292071] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 79 3e 6e 1c 85 08 06
	[Sep29 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 fd 85 4d 34 7c 08 06
	[  +6.578912] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 8d 25 f9 d3 f7 08 06
	[  +0.000514] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[Sep29 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 20 b3 46 3c a7 08 06
	[  +0.000504] IPv4: martian source 10.244.0.28 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[  +0.000784] IPv4: martian source 10.244.0.28 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 7b 6f c2 51 08 06
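
The "martian source" messages are the kernel's reverse-path filter flagging traffic from pod addresses (10.244.0.x) arriving on eth0, where no return route to that source exists; with minikube's bridged pod network this is expected noise rather than an error. The relevant knobs can be inspected with standard sysctls, e.g.:

  sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians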
	
	
	==> etcd [3a3152518f29] <==
	{"level":"warn","ts":"2025-09-29T10:36:45.881854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.888653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.902856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.911144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.918112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:45.975495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:36:57.777500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36938","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:37:03.895377Z","caller":"traceutil/trace.go:172","msg":"trace[858034751] transaction","detail":"{read_only:false; response_revision:1015; number_of_response:1; }","duration":"128.948339ms","start":"2025-09-29T10:37:03.766408Z","end":"2025-09-29T10:37:03.895357Z","steps":["trace[858034751] 'process raft request'  (duration: 128.809516ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:37:04.039951Z","caller":"traceutil/trace.go:172","msg":"trace[799810606] transaction","detail":"{read_only:false; response_revision:1016; number_of_response:1; }","duration":"139.742546ms","start":"2025-09-29T10:37:03.900189Z","end":"2025-09-29T10:37:04.039932Z","steps":["trace[799810606] 'process raft request'  (duration: 139.624536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:37:23.432233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.465158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.491685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.505493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.513523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.523591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.578347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.609684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.635915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.643629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.654366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:37:23.666394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55308","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:38:03.955142Z","caller":"traceutil/trace.go:172","msg":"trace[1495019545] transaction","detail":"{read_only:false; response_revision:1324; number_of_response:1; }","duration":"128.921298ms","start":"2025-09-29T10:38:03.826200Z","end":"2025-09-29T10:38:03.955121Z","steps":["trace[1495019545] 'process raft request'  (duration: 102.9549ms)","trace[1495019545] 'compare'  (duration: 25.853332ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T10:46:45.426042Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1847}
	{"level":"info","ts":"2025-09-29T10:46:45.453266Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1847,"took":"26.541323ms","hash":3270274898,"current-db-size-bytes":10637312,"current-db-size":"11 MB","current-db-size-in-use-bytes":6828032,"current-db-size-in-use":"6.8 MB"}
	{"level":"info","ts":"2025-09-29T10:46:45.453313Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3270274898,"revision":1847,"compact-revision":-1}
	
	
	==> kernel <==
	 10:47:23 up  2:29,  0 users,  load average: 0.31, 0.49, 0.80
	Linux addons-529420 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [cc27cd95c2f3] <==
	I0929 10:44:41.792291       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I0929 10:44:41.945334       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0929 10:44:42.053511       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0929 10:44:42.204159       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0929 10:44:42.660617       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	W0929 10:44:42.777299       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0929 10:44:42.793473       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0929 10:44:42.800545       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0929 10:44:42.861870       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W0929 10:44:42.888433       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0929 10:44:42.939793       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W0929 10:44:43.206461       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W0929 10:44:43.309112       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	I0929 10:44:45.905061       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0929 10:45:02.075817       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47796: use of closed network connection
	E0929 10:45:02.267143       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47826: use of closed network connection
	I0929 10:45:11.278756       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.35.49"}
	I0929 10:45:32.176305       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:45:37.495617       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 10:45:37.674939       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.237.222"}
	I0929 10:45:46.221514       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 10:46:01.566752       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:34.023075       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:46:46.423056       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 10:47:12.562468       1 stats.go:136] "Error getting keys" err="empty key: \"\""
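
This sequence tells the teardown story: the "Adding GroupVersion ...volcano.sh" lines at 10:44:41-42 are the Volcano CRDs being registered with the aggregator, and the "Terminating all watchers from cacher" lines that follow are those same resources being deleted when the addon was disabled after the failed Volcano test. Confirming the CRDs are gone is a one-liner (assumed from context):

  kubectl --context addons-529420 get crd -o name | grep volcano.sh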
	
	
	==> kube-controller-manager [ea3b2c765f2a] <==
	E0929 10:46:21.481773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:46:36.378637       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:46:36.379823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:46:36.822581       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:46:36.823771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:46:36.984439       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:46:36.985651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:46:38.576351       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:46:38.577577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:46:39.322409       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:46:39.323589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:46:45.099911       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:46:45.101290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:46:51.465207       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:46:51.466103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:47:04.913154       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:47:04.914054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:47:09.970227       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:47:09.971254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:47:10.566955       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:47:10.568057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:47:19.079187       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:47:19.080185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:47:19.611909       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:47:19.612864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
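
The repeating "Failed to watch *v1.PartialObjectMetadata" errors are the metadata informers (used by the garbage collector and quota machinery) still retrying watches against the just-deleted volcano.sh resources; they should taper off once the informers resync against the pruned discovery data. Filtering the live log is enough to watch them fade:

  kubectl --context addons-529420 -n kube-system logs kube-controller-manager-addons-529420 | grep -c "Failed to watch"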
	
	
	==> kube-proxy [7f229dd48a2c] <==
	I0929 10:36:55.962965       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:36:56.114754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:36:56.216582       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:36:56.217225       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:36:56.218058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:36:56.276653       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:36:56.277056       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:36:56.332280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:36:56.332751       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:36:56.332790       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:36:56.334977       1 config.go:200] "Starting service config controller"
	I0929 10:36:56.335003       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:36:56.335633       1 config.go:309] "Starting node config controller"
	I0929 10:36:56.335645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:36:56.335652       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:36:56.335877       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:36:56.335887       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:36:56.335924       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:36:56.335932       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:36:56.435153       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:36:56.436458       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:36:56.436501       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
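
The only non-informational line here is advisory: with nodePortAddresses unset, NodePort connections are accepted on all local IPs. The log itself suggests the fix; in a KubeProxyConfiguration it would look like the snippet below (a sketch of the documented option, not something this test sets):

  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  kind: KubeProxyConfiguration
  nodePortAddresses:
  - primary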
	
	
	==> kube-scheduler [7f38046322a2] <==
	I0929 10:36:46.773756       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0929 10:36:46.774579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:36:46.774618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 10:36:46.774676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:36:46.775018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:36:46.775287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:36:46.775438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:36:46.775611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:36:46.775732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:36:46.775732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:36:46.775766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:36:46.776166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:36:46.776381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:36:46.776394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:36:46.776523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:36:46.776523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:36:46.776556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:36:46.776598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:36:46.776614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 10:36:46.776631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:36:47.607010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:36:47.607036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:36:47.611460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:36:47.673666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0929 10:36:47.973483       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
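
The wall of "forbidden" list/watch errors is confined to the first second or two after startup (10:36:46-47): the scheduler comes up before kubeadm finishes bootstrapping RBAC, so its informers fail until the system:kube-scheduler bindings exist, after which the final "Caches are synced" line confirms recovery. The binding itself can be checked with:

  kubectl --context addons-529420 get clusterrolebinding system:kube-scheduler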
	
	
	==> kubelet <==
	Sep 29 10:46:41 addons-529420 kubelet[2313]: E0929 10:46:41.715316    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41" podUID="c2149918-d355-4cb3-979c-1e4996a2c505"
	Sep 29 10:46:46 addons-529420 kubelet[2313]: E0929 10:46:46.717976    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:46:50 addons-529420 kubelet[2313]: E0929 10:46:50.721968    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:46:54 addons-529420 kubelet[2313]: E0929 10:46:54.715358    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-5thg8" podUID="23d68edd-cead-477c-97b6-f1a7f8b03cbd"
	Sep 29 10:46:54 addons-529420 kubelet[2313]: E0929 10:46:54.715404    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41" podUID="c2149918-d355-4cb3-979c-1e4996a2c505"
	Sep 29 10:46:59 addons-529420 kubelet[2313]: E0929 10:46:59.703169    2313 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 10:46:59 addons-529420 kubelet[2313]: E0929 10:46:59.703226    2313 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 10:46:59 addons-529420 kubelet[2313]: E0929 10:46:59.703325    2313 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(f831007e-4eb0-4f0f-b585-d846fca29f0a): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:46:59 addons-529420 kubelet[2313]: E0929 10:46:59.703369    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:47:04 addons-529420 kubelet[2313]: E0929 10:47:04.696200    2313 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 10:47:04 addons-529420 kubelet[2313]: E0929 10:47:04.696259    2313 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 10:47:04 addons-529420 kubelet[2313]: E0929 10:47:04.696358    2313 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(f53396f1-c1a5-46b0-8089-67b5c438ba7b): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:47:04 addons-529420 kubelet[2313]: E0929 10:47:04.696402    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:47:06 addons-529420 kubelet[2313]: E0929 10:47:06.428347    2313 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:47:06 addons-529420 kubelet[2313]: E0929 10:47:06.428407    2313 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 29 10:47:06 addons-529420 kubelet[2313]: E0929 10:47:06.428573    2313 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41_local-path-storage(c2149918-d355-4cb3-979c-1e4996a2c505): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 10:47:06 addons-529420 kubelet[2313]: E0929 10:47:06.428608    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41" podUID="c2149918-d355-4cb3-979c-1e4996a2c505"
	Sep 29 10:47:08 addons-529420 kubelet[2313]: E0929 10:47:08.716531    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-5thg8" podUID="23d68edd-cead-477c-97b6-f1a7f8b03cbd"
	Sep 29 10:47:10 addons-529420 kubelet[2313]: I0929 10:47:10.712660    2313 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:47:10 addons-529420 kubelet[2313]: E0929 10:47:10.971618    2313 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Sep 29 10:47:10 addons-529420 kubelet[2313]: E0929 10:47:10.971717    2313 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5aecf48-b091-49c9-8743-a9d358c2fcc4-gcr-creds podName:a5aecf48-b091-49c9-8743-a9d358c2fcc4 nodeName:}" failed. No retries permitted until 2025-09-29 10:49:12.97169967 +0000 UTC m=+744.348833860 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/a5aecf48-b091-49c9-8743-a9d358c2fcc4-gcr-creds") pod "registry-creds-764b6fb674-ssj2n" (UID: "a5aecf48-b091-49c9-8743-a9d358c2fcc4") : secret "registry-creds-gcr" not found
	Sep 29 10:47:11 addons-529420 kubelet[2313]: E0929 10:47:11.714383    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="f831007e-4eb0-4f0f-b585-d846fca29f0a"
	Sep 29 10:47:15 addons-529420 kubelet[2313]: E0929 10:47:15.713207    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="f53396f1-c1a5-46b0-8089-67b5c438ba7b"
	Sep 29 10:47:19 addons-529420 kubelet[2313]: E0929 10:47:19.715868    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41" podUID="c2149918-d355-4cb3-979c-1e4996a2c505"
	Sep 29 10:47:20 addons-529420 kubelet[2313]: E0929 10:47:20.715022    2313 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-5thg8" podUID="23d68edd-cead-477c-97b6-f1a7f8b03cbd"
	
	
	==> storage-provisioner [7292e19e493f] <==
	W0929 10:46:57.544229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:46:59.547222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:46:59.551822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:01.554829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:01.558529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:03.561412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:03.566504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:05.569812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:05.573303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:07.576523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:07.581455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:09.585442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:09.590288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:11.593988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:11.598031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:13.600761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:13.604743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:15.607773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:15.612922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:17.616107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:17.622603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:19.625358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:19.629267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:21.632324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:47:21.637202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [910e07bc5455] <==
	I0929 10:36:56.614357       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 10:37:26.626869       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
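Triage note: every pull failure in the capture above shares one root cause, the Docker Hub anonymous pull rate limit ("toomanyrequests", HTTP 429). A quick way to confirm how much anonymous quota the CI host has left is to read the rate-limit headers Docker Hub returns for its documented preview repository. The Go sketch below does that; the token endpoint, preview repo, and ratelimit-* header names follow Docker's published rate-limit documentation, but verify them against the current docs before relying on this.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Anonymous pull token for Docker's rate-limit preview repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; per Docker's docs a HEAD request reports the
	// counters without consuming quota.
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()

	// e.g. "100;w=21600" means 100 pulls per 21600s (6h) window.
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}

When ratelimit-remaining reaches 0, the registry answers exactly the toomanyrequests error the kubelet logged above; authenticated pulls or a registry mirror raise the ceiling.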
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-529420 -n addons-529420
helpers_test.go:269: (dbg) Run:  kubectl --context addons-529420 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w registry-creds-764b6fb674-ssj2n helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41 yakd-dashboard-5ff678cb9-5thg8
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Yakd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w registry-creds-764b6fb674-ssj2n helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41 yakd-dashboard-5ff678cb9-5thg8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w registry-creds-764b6fb674-ssj2n helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41 yakd-dashboard-5ff678cb9-5thg8: exit status 1 (92.466942ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-529420/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:45:37 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nbt6z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nbt6z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  107s                default-scheduler  Successfully assigned default/nginx to addons-529420
	  Normal   Pulling    26s (x4 over 106s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     25s (x4 over 105s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     25s (x4 over 105s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x6 over 105s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     1s (x6 over 105s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-529420/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:45:29 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxqpd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-xxqpd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  115s                default-scheduler  Successfully assigned default/task-pv-pod to addons-529420
	  Warning  Failed     113s                kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    21s (x4 over 114s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     20s (x4 over 113s)  kubelet            Error: ErrImagePull
	  Warning  Failed     20s (x3 over 97s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x6 over 112s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     9s (x6 over 112s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-29hwq (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-29hwq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tj9td" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ncp6w" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-ssj2n" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-5thg8" not found

** /stderr **
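Triage note: the NotFound errors in stderr are a race, not an additional failure. The non-running pod list was captured at helpers_test.go:280, and by the time describe ran, five of the eight pods had already been deleted by addon teardown. Post-mortem tooling that walks a stale name list can tolerate this by treating NotFound as expected; a minimal client-go sketch (the namespace and pod names here are illustrative) looks like:

package main

import (
	"context"
	"log"
	"os"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at a usable kubeconfig (e.g. the minikube profile's).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Names taken from a possibly stale listing, as in the post-mortem above.
	for _, name := range []string{"nginx", "task-pv-pod", "test-local-path"} {
		pod, err := cs.CoreV1().Pods("default").Get(context.Background(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			continue // deleted between the list and this Get: expected during teardown
		}
		if err != nil {
			log.Printf("get %s: %v", name, err)
			continue
		}
		log.Printf("%s phase=%s", name, pod.Status.Phase)
	}
}

kubectl describe, by contrast, reports an error per missing pod and exits non-zero, which is why helpers_test.go:287 records exit status 1 even though the three surviving pods were described successfully.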
helpers_test.go:287: kubectl --context addons-529420 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-tj9td ingress-nginx-admission-patch-ncp6w registry-creds-764b6fb674-ssj2n helper-pod-create-pvc-dc48f0d2-8814-47d1-a731-5ee587529e41 yakd-dashboard-5ff678cb9-5thg8: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-529420 addons disable yakd --alsologtostderr -v=1: (5.612680073s)
--- FAIL: TestAddons/parallel/Yakd (127.93s)
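Triage note: the Yakd failure itself is environmental. One hedged way to make the suite immune to Docker Hub throttling is to pre-load the handful of docker.io images into the minikube node before the tests run, so pods never reach the registry. "minikube image load" is a real subcommand; the profile name and image list in this Go sketch are illustrative only.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	images := []string{
		"docker.io/marcnuri/yakd:0.0.5",
		"docker.io/nginx:alpine",
		"docker.io/busybox:stable",
	}
	for _, img := range images {
		// Pushes a locally cached image into the node's container runtime.
		cmd := exec.Command("minikube", "-p", "addons-529420", "image", "load", img)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("image load %s: %v", img, err)
		}
	}
}

An authenticated pull account or a pull-through registry mirror would achieve the same effect without baking an image list into the harness.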

TestFunctional/parallel/DashboardCmd (302s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-452443 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-452443 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-452443 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-452443 --alsologtostderr -v=1] stderr:
I0929 10:57:59.384575  874599 out.go:360] Setting OutFile to fd 1 ...
I0929 10:57:59.384882  874599 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:57:59.384893  874599 out.go:374] Setting ErrFile to fd 2...
I0929 10:57:59.384898  874599 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:57:59.385114  874599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
I0929 10:57:59.385409  874599 mustload.go:65] Loading cluster: functional-452443
I0929 10:57:59.385788  874599 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 10:57:59.386188  874599 cli_runner.go:164] Run: docker container inspect functional-452443 --format={{.State.Status}}
I0929 10:57:59.405062  874599 host.go:66] Checking if "functional-452443" exists ...
I0929 10:57:59.405371  874599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0929 10:57:59.459793  874599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:57:59.449378815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0929 10:57:59.459915  874599 api_server.go:166] Checking apiserver status ...
I0929 10:57:59.459967  874599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0929 10:57:59.460006  874599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-452443
I0929 10:57:59.479933  874599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/functional-452443/id_rsa Username:docker}
I0929 10:57:59.582972  874599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9253/cgroup
W0929 10:57:59.593752  874599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/9253/cgroup: Process exited with status 1
stdout:

stderr:
I0929 10:57:59.593811  874599 ssh_runner.go:195] Run: ls
I0929 10:57:59.597899  874599 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0929 10:57:59.602400  874599 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0929 10:57:59.602474  874599 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0929 10:57:59.602648  874599 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 10:57:59.602667  874599 addons.go:69] Setting dashboard=true in profile "functional-452443"
I0929 10:57:59.602677  874599 addons.go:238] Setting addon dashboard=true in "functional-452443"
I0929 10:57:59.602714  874599 host.go:66] Checking if "functional-452443" exists ...
I0929 10:57:59.603071  874599 cli_runner.go:164] Run: docker container inspect functional-452443 --format={{.State.Status}}
I0929 10:57:59.623817  874599 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0929 10:57:59.625012  874599 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0929 10:57:59.626201  874599 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0929 10:57:59.626233  874599 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0929 10:57:59.626324  874599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-452443
I0929 10:57:59.646186  874599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/functional-452443/id_rsa Username:docker}
I0929 10:57:59.755815  874599 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0929 10:57:59.755847  874599 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0929 10:57:59.775893  874599 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0929 10:57:59.775917  874599 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0929 10:57:59.795877  874599 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0929 10:57:59.795906  874599 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0929 10:57:59.816824  874599 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0929 10:57:59.816846  874599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0929 10:57:59.837114  874599 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0929 10:57:59.837149  874599 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0929 10:57:59.857796  874599 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0929 10:57:59.857823  874599 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0929 10:57:59.877103  874599 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0929 10:57:59.877133  874599 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0929 10:57:59.897702  874599 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0929 10:57:59.897727  874599 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0929 10:57:59.919480  874599 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0929 10:57:59.919522  874599 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0929 10:57:59.939686  874599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0929 10:58:00.457133  874599 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-452443 addons enable metrics-server

I0929 10:58:00.458159  874599 addons.go:201] Writing out "functional-452443" config to set dashboard=true...
W0929 10:58:00.458404  874599 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0929 10:58:00.459354  874599 kapi.go:59] client config for functional-452443: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt", KeyFile:"/home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.key", CAFile:"/home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0929 10:58:00.459827  874599 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0929 10:58:00.459843  874599 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0929 10:58:00.459849  874599 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0929 10:58:00.459855  874599 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0929 10:58:00.459859  874599 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0929 10:58:00.467352  874599 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  449cd086-18b5-4e38-bc59-54614c388988 860 0 2025-09-29 10:58:00 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-29 10:58:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.107.110.72,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.107.110.72],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0929 10:58:00.467565  874599 out.go:285] * Launching proxy ...
* Launching proxy ...
I0929 10:58:00.467643  874599 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-452443 proxy --port 36195]
I0929 10:58:00.467961  874599 dashboard.go:157] Waiting for kubectl to output host:port ...
I0929 10:58:00.514004  874599 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0929 10:58:00.514071  874599 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0929 10:58:00.524767  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[216cd950-72d7-4259-bf4f-9b3fe371827b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc000339600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003be640 TLS:<nil>}
I0929 10:58:00.524857  874599 retry.go:31] will retry after 122.039µs: Temporary Error: unexpected response code: 503
I0929 10:58:00.530715  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[33509db4-a720-4469-9f12-a2b0b60baa08] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0014ff8c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031adc0 TLS:<nil>}
I0929 10:58:00.530794  874599 retry.go:31] will retry after 196.766µs: Temporary Error: unexpected response code: 503
I0929 10:58:00.534312  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[529280a4-9387-44ee-b867-d6f6f7b5f1c5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc000339740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000619680 TLS:<nil>}
I0929 10:58:00.534369  874599 retry.go:31] will retry after 271.919µs: Temporary Error: unexpected response code: 503
I0929 10:58:00.537881  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[44930d14-1d00-474e-b685-35aa49e38b67] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0008ba840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031af00 TLS:<nil>}
I0929 10:58:00.537931  874599 retry.go:31] will retry after 196.833µs: Temporary Error: unexpected response code: 503
I0929 10:58:00.541403  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f61cf0e2-bf9c-41ed-b92f-a44d92ab5918] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0014ffa00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003be780 TLS:<nil>}
I0929 10:58:00.541478  874599 retry.go:31] will retry after 501.985µs: Temporary Error: unexpected response code: 503
I0929 10:58:00.544812  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[63f0aa6e-915c-4aa6-8150-112a31257ae0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0008ba9c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006197c0 TLS:<nil>}
I0929 10:58:00.544874  874599 retry.go:31] will retry after 996.779µs: Temporary Error: unexpected response code: 503
I0929 10:58:00.548292  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f60e5c21-edab-4c40-9ef3-740d13df2a79] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc00091db00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003be8c0 TLS:<nil>}
I0929 10:58:00.548342  874599 retry.go:31] will retry after 1.08678ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.551493  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6f8b0ec2-ad52-4256-b842-d203453c032b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0008bab00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000700f00 TLS:<nil>}
I0929 10:58:00.551554  874599 retry.go:31] will retry after 1.099022ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.555935  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5cfbc034-6164-4e82-baa0-cc0e39cc1677] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0014ffac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bef00 TLS:<nil>}
I0929 10:58:00.556005  874599 retry.go:31] will retry after 3.533686ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.562708  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4790881f-98df-4986-bad0-05327b822800] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc00091dc80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000619900 TLS:<nil>}
I0929 10:58:00.562774  874599 retry.go:31] will retry after 3.063629ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.568505  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff6c4cb3-ceb5-4c8c-bb44-2651023e6b3d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0014ffb80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000701040 TLS:<nil>}
I0929 10:58:00.568570  874599 retry.go:31] will retry after 8.271669ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.580544  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[11dc93c2-cd6c-4bdd-980c-4fb9594577d8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0008bac00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000619a40 TLS:<nil>}
I0929 10:58:00.580603  874599 retry.go:31] will retry after 8.146549ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.592781  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e55409f7-3e7c-4e50-b19c-6fe253394245] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc00091ddc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf2c0 TLS:<nil>}
I0929 10:58:00.592843  874599 retry.go:31] will retry after 18.289087ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.615191  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2de83cec-919e-4fd4-84bf-926306b85c1d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0014ffcc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000701180 TLS:<nil>}
I0929 10:58:00.615284  874599 retry.go:31] will retry after 28.613638ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.647710  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[31d96a8c-74cc-4b08-b205-044684f6a3d4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0008bad00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000619b80 TLS:<nil>}
I0929 10:58:00.647777  874599 retry.go:31] will retry after 18.159563ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.670087  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1f92dc9c-4bd6-41c6-aec3-2cf7bfc11b06] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc00091dec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf400 TLS:<nil>}
I0929 10:58:00.670156  874599 retry.go:31] will retry after 35.451337ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.709609  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[59dbcbdb-5463-4fe7-a266-d5a25ef2b6ed] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0014ffdc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000701400 TLS:<nil>}
I0929 10:58:00.709674  874599 retry.go:31] will retry after 86.722818ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.800459  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c0b0cbb3-f4a6-4a4b-b85a-8669ac41163c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0014ffe80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000619cc0 TLS:<nil>}
I0929 10:58:00.800531  874599 retry.go:31] will retry after 75.534289ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.880358  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8915ceb-4dad-459f-bca2-a9cf50c3d5b4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc000339880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000619e00 TLS:<nil>}
I0929 10:58:00.880484  874599 retry.go:31] will retry after 108.175929ms: Temporary Error: unexpected response code: 503
I0929 10:58:00.992903  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[93b08019-46e3-48a2-8adc-0812d206f753] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:00 GMT]] Body:0xc0016ec080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031b040 TLS:<nil>}
I0929 10:58:00.992994  874599 retry.go:31] will retry after 241.399001ms: Temporary Error: unexpected response code: 503
I0929 10:58:01.238680  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f4545d07-06d1-4207-9423-c6afee9a6618] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:01 GMT]] Body:0xc0014fff40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000701540 TLS:<nil>}
I0929 10:58:01.238759  874599 retry.go:31] will retry after 467.79565ms: Temporary Error: unexpected response code: 503
I0929 10:58:01.710575  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b2dd4b29-cab0-434f-9ffc-1b32f0c12a71] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:01 GMT]] Body:0xc001894080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7e00 TLS:<nil>}
I0929 10:58:01.710639  874599 retry.go:31] will retry after 661.457176ms: Temporary Error: unexpected response code: 503
I0929 10:58:02.375844  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8c6f504d-6aab-4a3e-9892-2cce09df97cb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:02 GMT]] Body:0xc000339980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316000 TLS:<nil>}
I0929 10:58:02.375923  874599 retry.go:31] will retry after 496.086767ms: Temporary Error: unexpected response code: 503
I0929 10:58:02.875887  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eff1b9cb-a9cd-47ae-aa64-6e6fdd57c9f4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:02 GMT]] Body:0xc001894140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031b180 TLS:<nil>}
I0929 10:58:02.875965  874599 retry.go:31] will retry after 883.101627ms: Temporary Error: unexpected response code: 503
I0929 10:58:03.763019  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1b3b78a0-ae7d-412d-ac41-81e5b24aacd5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:03 GMT]] Body:0xc000339a40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316140 TLS:<nil>}
I0929 10:58:03.763107  874599 retry.go:31] will retry after 1.80032961s: Temporary Error: unexpected response code: 503
I0929 10:58:05.567408  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8568d1b6-9a18-4ea6-921c-baa0da83a0a7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:05 GMT]] Body:0xc000339ac0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000701680 TLS:<nil>}
I0929 10:58:05.567522  874599 retry.go:31] will retry after 3.481882427s: Temporary Error: unexpected response code: 503
I0929 10:58:09.055119  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3fcc51d6-2113-4388-afa9-b3df0a23bc08] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:09 GMT]] Body:0xc0016ec300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031b2c0 TLS:<nil>}
I0929 10:58:09.055186  874599 retry.go:31] will retry after 4.928992516s: Temporary Error: unexpected response code: 503
I0929 10:58:13.990344  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20d2f622-6aa7-4e15-9aa1-0c32c0750b2a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:13 GMT]] Body:0xc001894200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031b7c0 TLS:<nil>}
I0929 10:58:13.990410  874599 retry.go:31] will retry after 7.954063468s: Temporary Error: unexpected response code: 503
I0929 10:58:21.948483  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[518aa2e0-d19f-404c-b578-184c462dd27c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:21 GMT]] Body:0xc001894280 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031b900 TLS:<nil>}
I0929 10:58:21.948568  874599 retry.go:31] will retry after 8.991118027s: Temporary Error: unexpected response code: 503
I0929 10:58:30.942798  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1a659da9-6519-4179-bf9c-49900433f1bb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:30 GMT]] Body:0xc000339cc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316280 TLS:<nil>}
I0929 10:58:30.942867  874599 retry.go:31] will retry after 6.805431672s: Temporary Error: unexpected response code: 503
I0929 10:58:37.752398  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7d1820d9-f841-4038-923e-5a2a54105690] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:37 GMT]] Body:0xc0016ec380 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031ba40 TLS:<nil>}
I0929 10:58:37.752484  874599 retry.go:31] will retry after 15.702381145s: Temporary Error: unexpected response code: 503
I0929 10:58:53.460624  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a6e9eca9-7b39-4d2a-ae6d-ccb4cd50cd8f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:58:53 GMT]] Body:0xc001894380 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007017c0 TLS:<nil>}
I0929 10:58:53.460688  874599 retry.go:31] will retry after 40.380408795s: Temporary Error: unexpected response code: 503
I0929 10:59:33.847497  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aeff748a-4da0-4237-ab94-b5481577b5ca] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:59:33 GMT]] Body:0xc000339dc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000701900 TLS:<nil>}
I0929 10:59:33.847569  874599 retry.go:31] will retry after 22.724575176s: Temporary Error: unexpected response code: 503
I0929 10:59:56.575727  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[839fcb7c-ebe2-49bc-9832-494c6a13b763] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 10:59:56 GMT]] Body:0xc0016ec500 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031bb80 TLS:<nil>}
I0929 10:59:56.575801  874599 retry.go:31] will retry after 1m4.979880487s: Temporary Error: unexpected response code: 503
I0929 11:01:01.559810  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a05f6d14-2a41-48fa-ab40-d7c5124fa4fd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:01:01 GMT]] Body:0xc000338340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031be00 TLS:<nil>}
I0929 11:01:01.559890  874599 retry.go:31] will retry after 54.48394338s: Temporary Error: unexpected response code: 503
I0929 11:01:56.048059  874599 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[061cb030-2708-469e-a76e-ccdf34eb8b55] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:01:56 GMT]] Body:0xc000338580 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000700000 TLS:<nil>}
I0929 11:01:56.048130  874599 retry.go:31] will retry after 1m12.369557962s: Temporary Error: unexpected response code: 503
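Triage note: the five minutes of stderr above are the proxy health probe retrying a 503 with jittered, roughly exponential backoff, from microsecond sleeps at the start to sleeps of over a minute at the end, until DashboardCmd's 302s budget expired. The dashboard pod most likely never became ready because its docker.io images hit the same pull limit seen earlier. Reduced to its essentials, the observable retry shape is the loop below; the names are illustrative, not minikube's actual retry.go API.

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// pollUntilOK probes url until it returns HTTP 200 or the budget is spent,
// sleeping with capped, jittered exponential backoff between attempts.
func pollUntilOK(url string, budget time.Duration) error {
	backoff := 100 * time.Microsecond
	stop := time.Now().Add(budget)
	for time.Now().Before(stop) {
		if resp, err := http.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Jitter avoids synchronized retries; the cap keeps one sleep from
		// consuming the whole remaining budget.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		if backoff *= 2; backoff > 90*time.Second {
			backoff = 90 * time.Second
		}
	}
	return fmt.Errorf("%s did not return 200 within %s", url, budget)
}

func main() {
	// The same kubectl proxy endpoint polled in the log; budget mirrors the test timeout.
	if err := pollUntilOK("http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}

With the backoff capped, the loop keeps sampling until the deadline instead of sleeping past it, which matches the cadence of the retry.go lines above.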
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-452443
helpers_test.go:243: (dbg) docker inspect functional-452443:

-- stdout --
	[
	    {
	        "Id": "9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90",
	        "Created": "2025-09-29T10:54:46.577482052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 856851,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:54:46.608065606Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/hostname",
	        "HostsPath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/hosts",
	        "LogPath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90-json.log",
	        "Name": "/functional-452443",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-452443:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-452443",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90",
	                "LowerDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77-init/diff:/var/lib/docker/overlay2/8785ead96546661033f04e32798b1ac8b184b1c04fc8862515b87d56909b6b9d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-452443",
	                "Source": "/var/lib/docker/volumes/functional-452443/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-452443",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-452443",
	                "name.minikube.sigs.k8s.io": "functional-452443",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf8f85b7f60f351d095574f692e84ae711dc023f4c840da4e2f2119a4435ef86",
	            "SandboxKey": "/var/run/docker/netns/cf8f85b7f60f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-452443": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:c1:98:d2:83:32",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f848f589c6bfb3b7b82d3ac7be21baaedff83a57ab4b9d0973de0715c752022b",
	                    "EndpointID": "a96c3b91746cef9f6bf60a092f5c310e0d47ba93f53e93fd0163a00f798eb508",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-452443",
	                        "9704eb87e161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
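The inspect output above shows each container port published on 127.0.0.1 with an ephemeral host port (for example 8441/tcp, the apiserver port, mapped to 33532). As a sketch, a single mapping can be pulled out with docker's Go-template formatter instead of scanning the full JSON (container name as in this run):

    # Print the host port bound to the apiserver port 8441/tcp.
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-452443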
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-452443 -n functional-452443
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-452443 logs -n 25: (1.029046982s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                            ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh        │ functional-452443 ssh findmnt -T /mount2                                                                                                                   │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ ssh        │ functional-452443 ssh findmnt -T /mount3                                                                                                                   │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ mount      │ -p functional-452443 --kill=true                                                                                                                           │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │                     │
	│ ssh        │ functional-452443 ssh sudo cat /etc/ssl/certs/807725.pem                                                                                                   │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ ssh        │ functional-452443 ssh sudo cat /usr/share/ca-certificates/807725.pem                                                                                       │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ ssh        │ functional-452443 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                   │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ ssh        │ functional-452443 ssh sudo cat /etc/ssl/certs/8077252.pem                                                                                                  │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ ssh        │ functional-452443 ssh sudo cat /usr/share/ca-certificates/8077252.pem                                                                                      │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ ssh        │ functional-452443 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                   │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ docker-env │ functional-452443 docker-env                                                                                                                               │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ docker-env │ functional-452443 docker-env                                                                                                                               │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ license    │                                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image save kicbase/echo-server:functional-452443 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image rm kicbase/echo-server:functional-452443 --alsologtostderr                                                                         │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image      │ functional-452443 image save --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ ssh        │ functional-452443 ssh sudo cat /etc/test/nested/copy/807725/hosts                                                                                          │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	└────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:57:59
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:57:59.167696  874467 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:57:59.167964  874467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:57:59.167972  874467 out.go:374] Setting ErrFile to fd 2...
	I0929 10:57:59.167977  874467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:57:59.168177  874467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 10:57:59.168670  874467 out.go:368] Setting JSON to false
	I0929 10:57:59.169730  874467 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9620,"bootTime":1759133859,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:57:59.169842  874467 start.go:140] virtualization: kvm guest
	I0929 10:57:59.171529  874467 out.go:179] * [functional-452443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:57:59.172657  874467 notify.go:220] Checking for updates...
	I0929 10:57:59.172682  874467 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:57:59.173980  874467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:57:59.175942  874467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:57:59.177379  874467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:57:59.178525  874467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:57:59.179675  874467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:57:59.181246  874467 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:57:59.181832  874467 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:57:59.206669  874467 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:57:59.206763  874467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:57:59.264759  874467 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:57:59.252950382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:57:59.264876  874467 docker.go:318] overlay module found
	I0929 10:57:59.266521  874467 out.go:179] * Using the docker driver based on existing profile
	I0929 10:57:59.267640  874467 start.go:304] selected driver: docker
	I0929 10:57:59.267664  874467 start.go:924] validating driver "docker" against &{Name:functional-452443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-452443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:57:59.267796  874467 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:57:59.267991  874467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:57:59.327057  874467 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:57:59.317050779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:57:59.327826  874467 cni.go:84] Creating CNI manager for ""
	I0929 10:57:59.327901  874467 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:57:59.327978  874467 start.go:348] cluster config:
	{Name:functional-452443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-452443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:57:59.329787  874467 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 29 10:58:18 functional-452443 cri-dockerd[7873]: time="2025-09-29T10:58:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6a07e4202214dae641037563ec852212d7fe79b2ac9a2b8958b6200248f185fa/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 29 10:58:19 functional-452443 dockerd[7109]: time="2025-09-29T10:58:19.126332982Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:58:31 functional-452443 dockerd[7109]: time="2025-09-29T10:58:31.251104471Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:58:35 functional-452443 dockerd[7109]: time="2025-09-29T10:58:35.269349958Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:58:36 functional-452443 dockerd[7109]: time="2025-09-29T10:58:36.238748934Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:58:44 functional-452443 dockerd[7109]: time="2025-09-29T10:58:44.509056499Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 10:58:44 functional-452443 dockerd[7109]: time="2025-09-29T10:58:44.984277219Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:58:45 functional-452443 dockerd[7109]: time="2025-09-29T10:58:45.495587918Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 10:58:45 functional-452443 dockerd[7109]: time="2025-09-29T10:58:45.964603763Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:02 functional-452443 dockerd[7109]: time="2025-09-29T10:59:02.238485360Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:12 functional-452443 dockerd[7109]: time="2025-09-29T10:59:12.249078095Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:27 functional-452443 dockerd[7109]: time="2025-09-29T10:59:27.504186417Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 10:59:27 functional-452443 dockerd[7109]: time="2025-09-29T10:59:27.978297937Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:30 functional-452443 dockerd[7109]: time="2025-09-29T10:59:30.245673637Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:36 functional-452443 dockerd[7109]: time="2025-09-29T10:59:36.496707702Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 10:59:36 functional-452443 dockerd[7109]: time="2025-09-29T10:59:36.972803892Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:46 functional-452443 dockerd[7109]: time="2025-09-29T10:59:46.252707998Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:00:33 functional-452443 dockerd[7109]: time="2025-09-29T11:00:33.547993670Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:00:33 functional-452443 cri-dockerd[7873]: time="2025-09-29T11:00:33Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Sep 29 11:00:56 functional-452443 dockerd[7109]: time="2025-09-29T11:00:56.504053059Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:00:56 functional-452443 dockerd[7109]: time="2025-09-29T11:00:56.983321000Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:00:57 functional-452443 dockerd[7109]: time="2025-09-29T11:00:57.495553292Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:00:57 functional-452443 dockerd[7109]: time="2025-09-29T11:00:57.967045366Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:01:02 functional-452443 dockerd[7109]: time="2025-09-29T11:01:02.264921601Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:01:11 functional-452443 dockerd[7109]: time="2025-09-29T11:01:11.251854890Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
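Every failed pull in the daemon log above reports toomanyrequests, i.e. Docker Hub's unauthenticated pull rate limit. That is why the kubernetesui/dashboard and kubernetesui/metrics-scraper images (and nginx:alpine) never arrive, and in turn why the proxied dashboard URL keeps returning 503. A sketch for checking the remaining anonymous quota via the registry's documented ratelimit headers (assumes curl and jq on the host):

    # Fetch an anonymous token for the rate-limit probe repository, then read
    # the RateLimit-Limit / RateLimit-Remaining headers from a manifest request.
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit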
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2820261789a08       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   f49c812ff73af       busybox-mount
	2b61657715b0f       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   cf9a947b36ef3       hello-node-75c85bcc94-6dswg
	519d25585d9d8       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   4c82b414cd2d4       hello-node-connect-7d85dfc575-q79gk
	0602021c398f2       52546a367cc9e                                                                                         5 minutes ago       Running             coredns                   2                   f74afc847e154       coredns-66bc5c9577-5t9hd
	03abdd4cbaff4       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   bf55a1c2fa8a4       storage-provisioner
	43f526c4f63e3       df0860106674d                                                                                         5 minutes ago       Running             kube-proxy                3                   79b5b939cb7a7       kube-proxy-pxp5b
	992f09f326360       46169d968e920                                                                                         5 minutes ago       Running             kube-scheduler            3                   323d69d75a82f       kube-scheduler-functional-452443
	2af4e44854764       5f1f5298c888d                                                                                         5 minutes ago       Running             etcd                      2                   7df074a2758e8       etcd-functional-452443
	b78af3703b194       a0af72f2ec6d6                                                                                         5 minutes ago       Running             kube-controller-manager   2                   935d1cdbad833       kube-controller-manager-functional-452443
	26e494d7e0a5c       90550c43ad2bc                                                                                         5 minutes ago       Running             kube-apiserver            0                   ed1565ba1327c       kube-apiserver-functional-452443
	0ebdc90c47881       46169d968e920                                                                                         5 minutes ago       Exited              kube-scheduler            2                   16bcf253eb6dd       kube-scheduler-functional-452443
	50fd35bd47b61       df0860106674d                                                                                         5 minutes ago       Exited              kube-proxy                2                   4f6668af3d403       kube-proxy-pxp5b
	294c14d825024       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       1                   716b04da95858       storage-provisioner
	812ef773bb7b8       5f1f5298c888d                                                                                         6 minutes ago       Exited              etcd                      1                   a643879a94586       etcd-functional-452443
	f3cc3bf7eecff       a0af72f2ec6d6                                                                                         6 minutes ago       Exited              kube-controller-manager   1                   9cdfe23701ac1       kube-controller-manager-functional-452443
	69b6d4a058709       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   235e3d5e1d777       coredns-66bc5c9577-5t9hd
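No dashboard or metrics-scraper container appears in the list above, which is consistent with the pull failures: the containers were never created because their images never downloaded. A quick confirmation sketch via namespace events (same context assumed; expect Failed/BackOff image pull events):

    kubectl --context functional-452443 -n kubernetes-dashboard get events --sort-by=.lastTimestamp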
	
	
	==> coredns [0602021c398f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49182 - 5842 "HINFO IN 5293055615066822950.8147071532598776670. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029161007s
	
	
	==> coredns [69b6d4a05870] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50093 - 2427 "HINFO IN 5085816626060849129.4149907659589998759. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021678816s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
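This older coredns instance was cut off mid-restart: the TLS handshake timeouts against 10.96.0.1:443 coincide with the apiserver being replaced, after which the pod received SIGTERM and the newer instance (0602021c398f) took over. If the terminated instance's logs are needed after such a restart, a sketch (k8s-app=kube-dns is the stock coredns label; --previous selects the prior container instance):

    kubectl --context functional-452443 -n kube-system logs -l k8s-app=kube-dns --previous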
	
	
	==> describe nodes <==
	Name:               functional-452443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-452443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=functional-452443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_55_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:55:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-452443
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:03:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:58:24 +0000   Mon, 29 Sep 2025 10:54:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:58:24 +0000   Mon, 29 Sep 2025 10:54:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:58:24 +0000   Mon, 29 Sep 2025 10:54:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:58:24 +0000   Mon, 29 Sep 2025 10:55:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-452443
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 1adfbe2ba45d45b69072dff918b31afa
	  System UUID:                555a974c-1461-41c6-a7c0-fcd5a6458b4b
	  Boot ID:                    2fed06dd-7ca1-49fc-8cb4-296c71642eea
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-6dswg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  default                     hello-node-connect-7d85dfc575-q79gk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  default                     mysql-5bb876957f-vwljh                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     4m43s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 coredns-66bc5c9577-5t9hd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m52s
	  kube-system                 etcd-functional-452443                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m58s
	  kube-system                 kube-apiserver-functional-452443              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-functional-452443     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-proxy-pxp5b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 kube-scheduler-functional-452443              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-2cf6d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wsddv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m51s                  kube-proxy       
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  Starting                 6m32s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m58s                  kubelet          Node functional-452443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m58s                  kubelet          Node functional-452443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m58s                  kubelet          Node functional-452443 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m58s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m53s                  node-controller  Node functional-452443 event: Registered Node functional-452443 in Controller
	  Normal  RegisteredNode           6m33s                  node-controller  Node functional-452443 event: Registered Node functional-452443 in Controller
	  Normal  Starting                 5m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m40s)  kubelet          Node functional-452443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m40s)  kubelet          Node functional-452443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x7 over 5m40s)  kubelet          Node functional-452443 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m34s                  node-controller  Node functional-452443 event: Registered Node functional-452443 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 df 88 08 78 89 08 06
	[  +0.326891] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 b3 ea 3d 15 d3 08 06
	[  +0.292071] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 79 3e 6e 1c 85 08 06
	[Sep29 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 fd 85 4d 34 7c 08 06
	[  +6.578912] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 8d 25 f9 d3 f7 08 06
	[  +0.000514] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[Sep29 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 20 b3 46 3c a7 08 06
	[  +0.000504] IPv4: martian source 10.244.0.28 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[  +0.000784] IPv4: martian source 10.244.0.28 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 7b 6f c2 51 08 06
	[Sep29 10:55] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 29 b2 98 63 f0 08 06
	[  +0.000053] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 0c 04 f0 eb c2 08 06
	[Sep29 10:56] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 80 4d 08 0b ed 08 06
	[Sep29 10:57] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a fe 75 57 99 97 08 06
	
	
	==> etcd [2af4e4485476] <==
	{"level":"warn","ts":"2025-09-29T10:57:22.722736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.731453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.737925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.747753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.753900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.760147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.767006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.772906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.779636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.786632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.792923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.799227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.805607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.811797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.818041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.824481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.831879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.838301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.845005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.851319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.857432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.873458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.880738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.888634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.942452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48558","server-name":"","error":"EOF"}
	
	
	==> etcd [812ef773bb7b] <==
	{"level":"warn","ts":"2025-09-29T10:56:23.959078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.972590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.979257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.985825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.998562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:24.010682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:24.053788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46968","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:57:06.088584Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T10:57:06.088717Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-452443","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T10:57:06.088843Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:57:13.091154Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:57:13.091298Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091392Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091484Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T10:57:13.091458Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"error","ts":"2025-09-29T10:57:13.091495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091509Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091545Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:57:13.091556Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:57:13.091560Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T10:57:13.091579Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T10:57:13.094068Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T10:57:13.094123Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:57:13.094153Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T10:57:13.094166Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-452443","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:03:00 up  2:45,  0 users,  load average: 0.15, 0.32, 0.52
	Linux functional-452443 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [26e494d7e0a5] <==
	I0929 10:57:24.352650       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W0929 10:57:24.613872       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0929 10:57:24.615872       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 10:57:24.960119       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 10:57:24.994220       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 10:57:25.018075       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 10:57:25.025444       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 10:57:27.081844       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 10:57:27.130221       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 10:57:39.595785       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.35.56"}
	I0929 10:57:44.645978       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.184.164"}
	I0929 10:57:44.672462       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.3.82"}
	I0929 10:57:45.655783       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.27.140"}
	I0929 10:58:00.303776       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 10:58:00.433906       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.110.72"}
	I0929 10:58:00.448384       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.2.211"}
	I0929 10:58:17.582425       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.119.205"}
	I0929 10:58:41.256974       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:58:50.647048       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:59:41.307768       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:00:16.165748       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:00:56.805471       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:01:41.587948       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:02:06.639953       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:02:51.141345       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [b78af3703b19] <==
	I0929 10:57:26.727219       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 10:57:26.727279       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 10:57:26.728038       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 10:57:26.728325       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 10:57:26.729228       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 10:57:26.729271       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 10:57:26.732345       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:57:26.734684       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 10:57:26.735866       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:57:26.739928       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:57:26.742296       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 10:57:26.742395       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 10:57:26.742501       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-452443"
	I0929 10:57:26.742552       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:57:26.744811       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 10:57:26.747472       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:57:26.750035       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 10:57:26.755446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 10:58:00.364718       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.371620       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.374614       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.376184       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.379516       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.388773       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.389833       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [f3cc3bf7eecf] <==
	I0929 10:56:27.853499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 10:56:27.855898       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 10:56:27.858171       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 10:56:27.860467       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 10:56:27.882000       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:56:27.882054       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 10:56:27.882108       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 10:56:27.882136       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 10:56:27.882139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 10:56:27.882252       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:56:27.882285       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 10:56:27.882348       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 10:56:27.882410       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 10:56:27.882582       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 10:56:27.882699       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-452443"
	I0929 10:56:27.882782       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:56:27.884650       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 10:56:27.886398       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 10:56:27.888962       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 10:56:27.889215       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:56:27.892353       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:56:27.913736       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:56:27.913764       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:56:27.913765       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:56:27.913774       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [43f526c4f63e] <==
	I0929 10:57:24.726005       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:57:24.788536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:57:24.889125       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:57:24.889190       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:57:24.889318       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:57:24.919481       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:57:24.919547       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:57:24.927324       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:57:24.928652       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:57:24.929506       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:57:24.932342       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:57:24.932363       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:57:24.932388       1 config.go:200] "Starting service config controller"
	I0929 10:57:24.932395       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:57:24.932439       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:57:24.932445       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:57:24.932488       1 config.go:309] "Starting node config controller"
	I0929 10:57:24.932507       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:57:25.032561       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 10:57:25.032594       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:57:25.032610       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:57:25.032610       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [50fd35bd47b6] <==
	I0929 10:57:18.304680       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:57:18.365710       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 10:57:18.366737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-452443&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [0ebdc90c4788] <==
	I0929 10:57:18.942790       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [992f09f32636] <==
	I0929 10:57:21.365688       1 serving.go:386] Generated self-signed cert in-memory
	W0929 10:57:23.324838       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 10:57:23.324961       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 10:57:23.324996       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 10:57:23.325042       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 10:57:23.350031       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:57:23.350072       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:57:23.352236       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:57:23.352279       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:57:23.352567       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:57:23.352863       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:57:23.453195       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:01:54 functional-452443 kubelet[8829]: E0929 11:01:54.263519    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:01:56 functional-452443 kubelet[8829]: E0929 11:01:56.260975    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:01:57 functional-452443 kubelet[8829]: E0929 11:01:57.262384    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:02:00 functional-452443 kubelet[8829]: E0929 11:02:00.263627    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:02:01 functional-452443 kubelet[8829]: E0929 11:02:01.262356    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:02:07 functional-452443 kubelet[8829]: E0929 11:02:07.262611    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:02:09 functional-452443 kubelet[8829]: E0929 11:02:09.260093    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:02:12 functional-452443 kubelet[8829]: E0929 11:02:12.262504    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:02:13 functional-452443 kubelet[8829]: E0929 11:02:13.262681    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:02:14 functional-452443 kubelet[8829]: E0929 11:02:14.262979    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:02:21 functional-452443 kubelet[8829]: E0929 11:02:21.260328    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:02:21 functional-452443 kubelet[8829]: E0929 11:02:21.262331    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:02:25 functional-452443 kubelet[8829]: E0929 11:02:25.262324    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:02:26 functional-452443 kubelet[8829]: E0929 11:02:26.262866    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:02:28 functional-452443 kubelet[8829]: E0929 11:02:28.262655    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:02:33 functional-452443 kubelet[8829]: E0929 11:02:33.260348    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:02:34 functional-452443 kubelet[8829]: E0929 11:02:34.262935    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:02:37 functional-452443 kubelet[8829]: E0929 11:02:37.262553    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:02:40 functional-452443 kubelet[8829]: E0929 11:02:40.263383    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:02:42 functional-452443 kubelet[8829]: E0929 11:02:42.262892    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:02:47 functional-452443 kubelet[8829]: E0929 11:02:47.260390    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:02:48 functional-452443 kubelet[8829]: E0929 11:02:48.263146    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:02:52 functional-452443 kubelet[8829]: E0929 11:02:52.262609    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:02:52 functional-452443 kubelet[8829]: E0929 11:02:52.262690    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:02:57 functional-452443 kubelet[8829]: E0929 11:02:57.263179    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	
	
	==> storage-provisioner [03abdd4cbaff] <==
	W0929 11:02:35.427473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:37.430856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:37.434957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:39.438403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:39.443505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:41.446952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:41.452217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:43.455219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:43.459592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:45.463357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:45.468019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:47.471689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:47.475605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:49.478791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:49.483184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:51.486875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:51.491128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:53.494501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:53.498890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:55.502838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:55.507322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:57.510685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:57.514677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:59.518752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:02:59.523042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [294c14d82502] <==
	I0929 10:56:45.320881       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a43b7f2-61d2-46e8-9782-7b3e373f54d4", APIVersion:"v1", ResourceVersion:"558", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-452443_682bbbcc-ebb4-4ad2-8adb-3ba01048abeb became leader
	I0929 10:56:45.320924       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-452443_682bbbcc-ebb4-4ad2-8adb-3ba01048abeb!
	W0929 10:56:45.323253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:45.327714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:56:45.421684       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-452443_682bbbcc-ebb4-4ad2-8adb-3ba01048abeb!
	W0929 10:56:47.330849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:47.335388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:49.339584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:49.345321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:51.348084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:51.352718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:53.356066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:53.360167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:55.363307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:55.367705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:57.370570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:57.375034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:59.378481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:59.383711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:01.386943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:01.391270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:03.394081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:03.398268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:05.402241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:05.406289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
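Every pod failure in the dump above traces back to the same root cause, visible in the kubelet log and pod events: unauthenticated image pulls from Docker Hub hitting the toomanyrequests rate limit. A hypothetical workaround sketch, not part of this test run: authenticate to Docker Hub and side-load the affected images into the node so kubelet never pulls from the registry. The image names are taken from the events above; the loop and the use of `minikube image load` against this profile are assumptions.

	# Sketch: pull once with authenticated credentials, then side-load each
	# image into the functional-452443 node so kubelet finds it locally.
	docker login
	for img in docker.io/mysql:5.7 docker.io/nginx docker.io/nginx:alpine; do
	  docker pull "$img"
	  out/minikube-linux-amd64 -p functional-452443 image load "$img"
	done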
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-452443 -n functional-452443
helpers_test.go:269: (dbg) Run:  kubectl --context functional-452443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-452443 describe pod busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-452443 describe pod busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv: exit status 1 (99.175871ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:57:57 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://2820261789a088603ce8063bfe2c7aaf645df80c99e482409fafdaac2ce2bdd1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 10:58:00 +0000
	      Finished:     Mon, 29 Sep 2025 10:58:00 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xpfzv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xpfzv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m4s  default-scheduler  Successfully assigned default/busybox-mount to functional-452443
	  Normal  Pulling    5m3s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m1s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.873s (1.873s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m1s  kubelet            Created container: mount-munger
	  Normal  Started    5m1s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-vwljh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:58:17 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9cxl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f9cxl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m44s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-vwljh to functional-452443
	  Normal   Pulling    111s (x5 over 4m43s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     110s (x5 over 4m42s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     110s (x5 over 4m42s)  kubelet            Error: ErrImagePull
	  Warning  Failed     35s (x15 over 4m41s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    9s (x17 over 4m41s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:57:44 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4pqtp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4pqtp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m17s                  default-scheduler  Successfully assigned default/nginx-svc to functional-452443
	  Warning  Failed     3m49s (x3 over 4m57s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m29s (x5 over 5m16s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m28s (x2 over 5m13s)  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m28s (x5 over 5m13s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x20 over 5m13s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     9s (x20 over 5m13s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:57:50 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xcvcq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-xcvcq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m11s                 default-scheduler  Successfully assigned default/sp-pod to functional-452443
	  Normal   Pulling    2m (x5 over 5m11s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     119s (x5 over 5m10s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     119s (x5 over 5m10s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    0s (x20 over 5m9s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     0s (x20 over 5m9s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-2cf6d" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wsddv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-452443 describe pod busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.00s)
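Every Failed event in the describes above has the same root cause: the node's Docker daemon pulls from Docker Hub anonymously, and the shared CI host has exhausted the unauthenticated pull rate limit (toomanyrequests). Whether any anonymous quota remains can be checked from the agent with Docker Hub's documented rate-limit probe; this is a minimal sketch, assuming curl and jq are available on the host, using the ratelimitpreview/test repository Docker publishes for this purpose:

  # Fetch an anonymous pull token scoped to Docker Hub's rate-limit probe repo.
  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

  # A HEAD request should not consume quota; the ratelimit-limit and
  # ratelimit-remaining headers report the current anonymous allowance for this IP.
  curl -sI -H "Authorization: Bearer $TOKEN" \
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i '^ratelimit'

A ratelimit-remaining of 0 here would account for every ImagePullBackOff in this run.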
TestFunctional/parallel/PersistentVolumeClaim (368.7s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f3fe4c87-e336-4afa-b77e-fd665c4cfa3b] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00440605s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-452443 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-452443 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-452443 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-452443 apply -f testdata/storage-provisioner/pod.yaml
I0929 10:57:50.466269  807725 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [89090c16-9901-49ad-961f-7eccda79d3e3] Pending
helpers_test.go:352: "sp-pod" [89090c16-9901-49ad-961f-7eccda79d3e3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-452443 -n functional-452443
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-29 11:03:50.783896519 +0000 UTC m=+1684.619815337
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-452443 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-452443 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-452443/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:57:50 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:  10.244.0.10
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xcvcq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-xcvcq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-452443
  Normal   Pulling    2m49s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     2m48s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m48s (x5 over 5m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     49s (x20 over 5m58s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    38s (x21 over 5m58s)   kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-452443 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-452443 logs sp-pod -n default: exit status 1 (73.425457ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-452443 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
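The storage path under test actually worked: the claim was created, sp-pod was scheduled, and the volume mounted (PodReadyToStartContainers is True). The timeout is purely the rate-limited docker.io/nginx pull. Two common mitigations for a run like this, sketched with placeholder values (the DOCKERHUB_* variables and the mirror URL are illustrative, not this job's configuration):

  # Authenticate the Docker daemon inside the minikube node, so image pulls
  # count against an account's quota rather than the agent's anonymous IP quota.
  minikube -p functional-452443 ssh "docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASS"

  # Or route pulls through a registry mirror (placeholder URL); note that
  # --registry-mirror only takes effect when the cluster is created.
  minikube start -p functional-452443 --registry-mirror=https://registry-mirror.example.com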
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-452443
helpers_test.go:243: (dbg) docker inspect functional-452443:

-- stdout --
	[
	    {
	        "Id": "9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90",
	        "Created": "2025-09-29T10:54:46.577482052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 856851,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:54:46.608065606Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/hostname",
	        "HostsPath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/hosts",
	        "LogPath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90-json.log",
	        "Name": "/functional-452443",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-452443:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-452443",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90",
	                "LowerDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77-init/diff:/var/lib/docker/overlay2/8785ead96546661033f04e32798b1ac8b184b1c04fc8862515b87d56909b6b9d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-452443",
	                "Source": "/var/lib/docker/volumes/functional-452443/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-452443",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-452443",
	                "name.minikube.sigs.k8s.io": "functional-452443",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf8f85b7f60f351d095574f692e84ae711dc023f4c840da4e2f2119a4435ef86",
	            "SandboxKey": "/var/run/docker/netns/cf8f85b7f60f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-452443": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:c1:98:d2:83:32",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f848f589c6bfb3b7b82d3ac7be21baaedff83a57ab4b9d0973de0715c752022b",
	                    "EndpointID": "a96c3b91746cef9f6bf60a092f5c310e0d47ba93f53e93fd0163a00f798eb508",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-452443",
	                        "9704eb87e161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-452443 -n functional-452443
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-452443 logs -n 25: (1.031992586s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ docker-env     │ functional-452443 docker-env                                                                                                                               │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ license        │                                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image save kicbase/echo-server:functional-452443 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image rm kicbase/echo-server:functional-452443 --alsologtostderr                                                                         │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image save --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ ssh            │ functional-452443 ssh sudo cat /etc/test/nested/copy/807725/hosts                                                                                          │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ update-context │ functional-452443 update-context --alsologtostderr -v=2                                                                                                    │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ update-context │ functional-452443 update-context --alsologtostderr -v=2                                                                                                    │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ update-context │ functional-452443 update-context --alsologtostderr -v=2                                                                                                    │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ image          │ functional-452443 image ls --format short --alsologtostderr                                                                                                │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ image          │ functional-452443 image ls --format yaml --alsologtostderr                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ ssh            │ functional-452443 ssh pgrep buildkitd                                                                                                                      │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │                     │
	│ image          │ functional-452443 image build -t localhost/my-image:functional-452443 testdata/build --alsologtostderr                                                     │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ image          │ functional-452443 image ls --format json --alsologtostderr                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ image          │ functional-452443 image ls --format table --alsologtostderr                                                                                                │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:57:59
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:57:59.167696  874467 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:57:59.167964  874467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:57:59.167972  874467 out.go:374] Setting ErrFile to fd 2...
	I0929 10:57:59.167977  874467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:57:59.168177  874467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 10:57:59.168670  874467 out.go:368] Setting JSON to false
	I0929 10:57:59.169730  874467 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9620,"bootTime":1759133859,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:57:59.169842  874467 start.go:140] virtualization: kvm guest
	I0929 10:57:59.171529  874467 out.go:179] * [functional-452443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:57:59.172657  874467 notify.go:220] Checking for updates...
	I0929 10:57:59.172682  874467 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:57:59.173980  874467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:57:59.175942  874467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:57:59.177379  874467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:57:59.178525  874467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:57:59.179675  874467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:57:59.181246  874467 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:57:59.181832  874467 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:57:59.206669  874467 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:57:59.206763  874467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:57:59.264759  874467 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:57:59.252950382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:57:59.264876  874467 docker.go:318] overlay module found
	I0929 10:57:59.266521  874467 out.go:179] * Using the docker driver based on existing profile
	I0929 10:57:59.267640  874467 start.go:304] selected driver: docker
	I0929 10:57:59.267664  874467 start.go:924] validating driver "docker" against &{Name:functional-452443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-452443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:57:59.267796  874467 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:57:59.267991  874467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:57:59.327057  874467 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:57:59.317050779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:57:59.327826  874467 cni.go:84] Creating CNI manager for ""
	I0929 10:57:59.327901  874467 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:57:59.327978  874467 start.go:348] cluster config:
	{Name:functional-452443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-452443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:57:59.329787  874467 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 29 10:58:44 functional-452443 dockerd[7109]: time="2025-09-29T10:58:44.984277219Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:58:45 functional-452443 dockerd[7109]: time="2025-09-29T10:58:45.495587918Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 10:58:45 functional-452443 dockerd[7109]: time="2025-09-29T10:58:45.964603763Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:02 functional-452443 dockerd[7109]: time="2025-09-29T10:59:02.238485360Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:12 functional-452443 dockerd[7109]: time="2025-09-29T10:59:12.249078095Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:27 functional-452443 dockerd[7109]: time="2025-09-29T10:59:27.504186417Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 10:59:27 functional-452443 dockerd[7109]: time="2025-09-29T10:59:27.978297937Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:30 functional-452443 dockerd[7109]: time="2025-09-29T10:59:30.245673637Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:36 functional-452443 dockerd[7109]: time="2025-09-29T10:59:36.496707702Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 10:59:36 functional-452443 dockerd[7109]: time="2025-09-29T10:59:36.972803892Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:46 functional-452443 dockerd[7109]: time="2025-09-29T10:59:46.252707998Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:00:33 functional-452443 dockerd[7109]: time="2025-09-29T11:00:33.547993670Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:00:33 functional-452443 cri-dockerd[7873]: time="2025-09-29T11:00:33Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Sep 29 11:00:56 functional-452443 dockerd[7109]: time="2025-09-29T11:00:56.504053059Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:00:56 functional-452443 dockerd[7109]: time="2025-09-29T11:00:56.983321000Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:00:57 functional-452443 dockerd[7109]: time="2025-09-29T11:00:57.495553292Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:00:57 functional-452443 dockerd[7109]: time="2025-09-29T11:00:57.967045366Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:01:02 functional-452443 dockerd[7109]: time="2025-09-29T11:01:02.264921601Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:01:11 functional-452443 dockerd[7109]: time="2025-09-29T11:01:11.251854890Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:03:19 functional-452443 dockerd[7109]: time="2025-09-29T11:03:19.540030433Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:03:19 functional-452443 cri-dockerd[7873]: time="2025-09-29T11:03:19Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Sep 29 11:03:40 functional-452443 dockerd[7109]: time="2025-09-29T11:03:40.507435197Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:03:40 functional-452443 dockerd[7109]: time="2025-09-29T11:03:40.981566567Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:03:44 functional-452443 dockerd[7109]: time="2025-09-29T11:03:44.499184395Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:03:44 functional-452443 dockerd[7109]: time="2025-09-29T11:03:44.980538480Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2820261789a08       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   f49c812ff73af       busybox-mount
	2b61657715b0f       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   cf9a947b36ef3       hello-node-75c85bcc94-6dswg
	519d25585d9d8       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   4c82b414cd2d4       hello-node-connect-7d85dfc575-q79gk
	0602021c398f2       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   f74afc847e154       coredns-66bc5c9577-5t9hd
	03abdd4cbaff4       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       2                   bf55a1c2fa8a4       storage-provisioner
	43f526c4f63e3       df0860106674d                                                                                         6 minutes ago       Running             kube-proxy                3                   79b5b939cb7a7       kube-proxy-pxp5b
	992f09f326360       46169d968e920                                                                                         6 minutes ago       Running             kube-scheduler            3                   323d69d75a82f       kube-scheduler-functional-452443
	2af4e44854764       5f1f5298c888d                                                                                         6 minutes ago       Running             etcd                      2                   7df074a2758e8       etcd-functional-452443
	b78af3703b194       a0af72f2ec6d6                                                                                         6 minutes ago       Running             kube-controller-manager   2                   935d1cdbad833       kube-controller-manager-functional-452443
	26e494d7e0a5c       90550c43ad2bc                                                                                         6 minutes ago       Running             kube-apiserver            0                   ed1565ba1327c       kube-apiserver-functional-452443
	0ebdc90c47881       46169d968e920                                                                                         6 minutes ago       Exited              kube-scheduler            2                   16bcf253eb6dd       kube-scheduler-functional-452443
	50fd35bd47b61       df0860106674d                                                                                         6 minutes ago       Exited              kube-proxy                2                   4f6668af3d403       kube-proxy-pxp5b
	294c14d825024       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       1                   716b04da95858       storage-provisioner
	812ef773bb7b8       5f1f5298c888d                                                                                         7 minutes ago       Exited              etcd                      1                   a643879a94586       etcd-functional-452443
	f3cc3bf7eecff       a0af72f2ec6d6                                                                                         7 minutes ago       Exited              kube-controller-manager   1                   9cdfe23701ac1       kube-controller-manager-functional-452443
	69b6d4a058709       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   235e3d5e1d777       coredns-66bc5c9577-5t9hd
	
	
	==> coredns [0602021c398f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49182 - 5842 "HINFO IN 5293055615066822950.8147071532598776670. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029161007s
	
	
	==> coredns [69b6d4a05870] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50093 - 2427 "HINFO IN 5085816626060849129.4149907659589998759. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021678816s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-452443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-452443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=functional-452443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_55_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:55:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-452443
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:03:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:03:30 +0000   Mon, 29 Sep 2025 10:54:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:03:30 +0000   Mon, 29 Sep 2025 10:54:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:03:30 +0000   Mon, 29 Sep 2025 10:54:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:03:30 +0000   Mon, 29 Sep 2025 10:55:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-452443
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 1adfbe2ba45d45b69072dff918b31afa
	  System UUID:                555a974c-1461-41c6-a7c0-fcd5a6458b4b
	  Boot ID:                    2fed06dd-7ca1-49fc-8cb4-296c71642eea
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-6dswg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     hello-node-connect-7d85dfc575-q79gk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     mysql-5bb876957f-vwljh                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m34s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-5t9hd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m43s
	  kube-system                 etcd-functional-452443                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m49s
	  kube-system                 kube-apiserver-functional-452443              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-functional-452443     200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 kube-proxy-pxp5b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-scheduler-functional-452443              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-2cf6d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wsddv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m42s                  kube-proxy       
	  Normal  Starting                 6m26s                  kube-proxy       
	  Normal  Starting                 7m23s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  8m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m49s                  kubelet          Node functional-452443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m49s                  kubelet          Node functional-452443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m49s                  kubelet          Node functional-452443 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m49s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m44s                  node-controller  Node functional-452443 event: Registered Node functional-452443 in Controller
	  Normal  RegisteredNode           7m24s                  node-controller  Node functional-452443 event: Registered Node functional-452443 in Controller
	  Normal  Starting                 6m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m31s (x8 over 6m31s)  kubelet          Node functional-452443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s (x8 over 6m31s)  kubelet          Node functional-452443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s (x7 over 6m31s)  kubelet          Node functional-452443 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m25s                  node-controller  Node functional-452443 event: Registered Node functional-452443 in Controller
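The node itself stays Ready for the whole window, so the failures elsewhere in this report are image pulls, not scheduling or resource pressure; the repeated Starting/RegisteredNode rounds in the events are consistent with the control-plane restarts the functional tests perform, not with node flapping. A minimal sketch for extracting just the node conditions instead of the full describe (same context assumption):

    kubectl --context functional-452443 get node functional-452443 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'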
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 df 88 08 78 89 08 06
	[  +0.326891] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 b3 ea 3d 15 d3 08 06
	[  +0.292071] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 79 3e 6e 1c 85 08 06
	[Sep29 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 fd 85 4d 34 7c 08 06
	[  +6.578912] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 8d 25 f9 d3 f7 08 06
	[  +0.000514] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[Sep29 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 20 b3 46 3c a7 08 06
	[  +0.000504] IPv4: martian source 10.244.0.28 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[  +0.000784] IPv4: martian source 10.244.0.28 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 7b 6f c2 51 08 06
	[Sep29 10:55] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 29 b2 98 63 f0 08 06
	[  +0.000053] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 0c 04 f0 eb c2 08 06
	[Sep29 10:56] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 80 4d 08 0b ed 08 06
	[Sep29 10:57] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a fe 75 57 99 97 08 06
	
	
	==> etcd [2af4e4485476] <==
	{"level":"warn","ts":"2025-09-29T10:57:22.722736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.731453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.737925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.747753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.753900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.760147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.767006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.772906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.779636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.786632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.792923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.799227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.805607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.811797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.818041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.824481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.831879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.838301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.845005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.851319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.857432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.873458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.880738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.888634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.942452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48558","server-name":"","error":"EOF"}
	
	
	==> etcd [812ef773bb7b] <==
	{"level":"warn","ts":"2025-09-29T10:56:23.959078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.972590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.979257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.985825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.998562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:24.010682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:24.053788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46968","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:57:06.088584Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T10:57:06.088717Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-452443","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T10:57:06.088843Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:57:13.091154Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:57:13.091298Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091392Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091484Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T10:57:13.091458Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"error","ts":"2025-09-29T10:57:13.091495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091509Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091545Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:57:13.091556Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:57:13.091560Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T10:57:13.091579Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T10:57:13.094068Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T10:57:13.094123Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:57:13.094153Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T10:57:13.094166Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-452443","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:03:52 up  2:46,  0 users,  load average: 0.29, 0.34, 0.51
	Linux functional-452443 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [26e494d7e0a5] <==
	W0929 10:57:24.613872       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0929 10:57:24.615872       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 10:57:24.960119       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 10:57:24.994220       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 10:57:25.018075       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 10:57:25.025444       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 10:57:27.081844       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 10:57:27.130221       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 10:57:39.595785       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.35.56"}
	I0929 10:57:44.645978       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.184.164"}
	I0929 10:57:44.672462       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.3.82"}
	I0929 10:57:45.655783       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.27.140"}
	I0929 10:58:00.303776       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 10:58:00.433906       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.110.72"}
	I0929 10:58:00.448384       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.2.211"}
	I0929 10:58:17.582425       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.119.205"}
	I0929 10:58:41.256974       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:58:50.647048       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:59:41.307768       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:00:16.165748       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:00:56.805471       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:01:41.587948       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:02:06.639953       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:02:51.141345       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:03:10.521754       1 stats.go:136] "Error getting keys" err="empty key: \"\""
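The clusterIP allocations above track the services the functional tests create (invalid-svc, hello-node-connect, nginx-svc, hello-node, the dashboard pair, mysql); the recurring stats.go "Error getting keys" err="empty key" lines are apiserver-internal noise and do not correlate with any test failure in this report. To list what was actually created (same context assumption):

    kubectl --context functional-452443 get svc -A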
	
	
	==> kube-controller-manager [b78af3703b19] <==
	I0929 10:57:26.727219       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 10:57:26.727279       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 10:57:26.728038       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 10:57:26.728325       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 10:57:26.729228       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 10:57:26.729271       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 10:57:26.732345       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:57:26.734684       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 10:57:26.735866       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:57:26.739928       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:57:26.742296       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 10:57:26.742395       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 10:57:26.742501       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-452443"
	I0929 10:57:26.742552       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:57:26.744811       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 10:57:26.747472       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:57:26.750035       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 10:57:26.755446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 10:58:00.364718       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.371620       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.374614       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.376184       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.379516       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.388773       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.389833       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
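The burst of serviceaccount "kubernetes-dashboard" not found errors at 10:58:00 is a creation-order race: the ReplicaSets were synced before the ServiceAccount in the freshly created kubernetes-dashboard namespace existed. It self-resolved; both dashboard pods appear in the node's pod table above. A hedged check (same context assumption):

    kubectl --context functional-452443 -n kubernetes-dashboard get serviceaccount,replicaset,pod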
	
	
	==> kube-controller-manager [f3cc3bf7eecf] <==
	I0929 10:56:27.853499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 10:56:27.855898       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 10:56:27.858171       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 10:56:27.860467       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 10:56:27.882000       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:56:27.882054       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 10:56:27.882108       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 10:56:27.882136       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 10:56:27.882139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 10:56:27.882252       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:56:27.882285       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 10:56:27.882348       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 10:56:27.882410       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 10:56:27.882582       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 10:56:27.882699       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-452443"
	I0929 10:56:27.882782       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:56:27.884650       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 10:56:27.886398       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 10:56:27.888962       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 10:56:27.889215       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:56:27.892353       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:56:27.913736       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:56:27.913764       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:56:27.913765       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:56:27.913774       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [43f526c4f63e] <==
	I0929 10:57:24.726005       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:57:24.788536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:57:24.889125       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:57:24.889190       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:57:24.889318       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:57:24.919481       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:57:24.919547       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:57:24.927324       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:57:24.928652       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:57:24.929506       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:57:24.932342       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:57:24.932363       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:57:24.932388       1 config.go:200] "Starting service config controller"
	I0929 10:57:24.932395       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:57:24.932439       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:57:24.932445       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:57:24.932488       1 config.go:309] "Starting node config controller"
	I0929 10:57:24.932507       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:57:25.032561       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 10:57:25.032594       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:57:25.032610       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:57:25.032610       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
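This kube-proxy instance comes up cleanly; its only complaint is the unset nodePortAddresses, and the warning itself names the fix (`--nodeport-addresses primary`). A hedged sketch of where that lands in a kubeadm/minikube-style deployment (the ConfigMap name and the config.conf key are assumptions based on the standard kube-proxy ConfigMap, which this report does not show):

    kubectl --context functional-452443 -n kube-system edit configmap kube-proxy
    # in the embedded KubeProxyConfiguration (config.conf), assuming the usual layout:
    #   nodePortAddresses: ["primary"]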
	
	
	==> kube-proxy [50fd35bd47b6] <==
	I0929 10:57:18.304680       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:57:18.365710       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 10:57:18.366737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-452443&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [0ebdc90c4788] <==
	I0929 10:57:18.942790       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [992f09f32636] <==
	I0929 10:57:21.365688       1 serving.go:386] Generated self-signed cert in-memory
	W0929 10:57:23.324838       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 10:57:23.324961       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 10:57:23.324996       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 10:57:23.325042       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 10:57:23.350031       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:57:23.350072       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:57:23.352236       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:57:23.352279       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:57:23.352567       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:57:23.352863       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:57:23.453195       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:03:06 functional-452443 kubelet[8829]: E0929 11:03:06.262261    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:03:10 functional-452443 kubelet[8829]: E0929 11:03:10.265908    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:03:12 functional-452443 kubelet[8829]: E0929 11:03:12.260380    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:03:14 functional-452443 kubelet[8829]: E0929 11:03:14.263132    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:03:16 functional-452443 kubelet[8829]: E0929 11:03:16.262224    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:03:19 functional-452443 kubelet[8829]: E0929 11:03:19.542506    8829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 11:03:19 functional-452443 kubelet[8829]: E0929 11:03:19.542583    8829 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 29 11:03:19 functional-452443 kubelet[8829]: E0929 11:03:19.542717    8829 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(7db0e382-55c4-415d-8eab-de12640ee91a): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:03:19 functional-452443 kubelet[8829]: E0929 11:03:19.542761    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:03:25 functional-452443 kubelet[8829]: E0929 11:03:25.262525    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:03:27 functional-452443 kubelet[8829]: E0929 11:03:27.260826    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:03:28 functional-452443 kubelet[8829]: E0929 11:03:28.262189    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:03:29 functional-452443 kubelet[8829]: E0929 11:03:29.263015    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:03:33 functional-452443 kubelet[8829]: E0929 11:03:33.262724    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:03:40 functional-452443 kubelet[8829]: E0929 11:03:40.984113    8829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:03:40 functional-452443 kubelet[8829]: E0929 11:03:40.984188    8829 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:03:40 functional-452443 kubelet[8829]: E0929 11:03:40.984292    8829 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-wsddv_kubernetes-dashboard(a3e360df-cf29-4d99-91d6-aaae27aa6cfe): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:03:40 functional-452443 kubelet[8829]: E0929 11:03:40.984331    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:03:42 functional-452443 kubelet[8829]: E0929 11:03:42.260390    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:03:42 functional-452443 kubelet[8829]: E0929 11:03:42.262197    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:03:44 functional-452443 kubelet[8829]: E0929 11:03:44.982848    8829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:03:44 functional-452443 kubelet[8829]: E0929 11:03:44.982930    8829 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:03:44 functional-452443 kubelet[8829]: E0929 11:03:44.983051    8829 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d_kubernetes-dashboard(e461b8e2-b1f4-4815-88e2-19e37959ef47): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:03:44 functional-452443 kubelet[8829]: E0929 11:03:44.983098    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:03:48 functional-452443 kubelet[8829]: E0929 11:03:48.263001    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
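Every error in this kubelet window has one root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests), which is also what keeps mysql-5bb876957f-vwljh, nginx-svc, sp-pod and the dashboard pods Pending in the describes below. A hedged sketch of the usual workarounds for a minikube run like this one (the flags are real minikube/docker options; the mirror URL is a placeholder):

    docker login                                                  # authenticated pulls get a higher quota
    minikube -p functional-452443 image load docker.io/mysql:5.7  # side-load an image instead of pulling in-cluster
    minikube start -p functional-452443 --registry-mirror=https://mirror.example.com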
	
	
	==> storage-provisioner [03abdd4cbaff] <==
	W0929 11:03:27.639407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:29.642699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:29.646854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:31.649878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:31.654027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:33.657348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:33.662374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:35.666043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:35.671364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:37.674969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:37.679284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:39.682518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:39.686802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:41.690328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:41.694341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:43.697646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:43.702934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:45.706136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:45.709850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:47.713471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:47.717430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:49.720926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:49.726168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:51.729649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:03:51.733791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [294c14d82502] <==
	I0929 10:56:45.320881       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a43b7f2-61d2-46e8-9782-7b3e373f54d4", APIVersion:"v1", ResourceVersion:"558", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-452443_682bbbcc-ebb4-4ad2-8adb-3ba01048abeb became leader
	I0929 10:56:45.320924       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-452443_682bbbcc-ebb4-4ad2-8adb-3ba01048abeb!
	W0929 10:56:45.323253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:45.327714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:56:45.421684       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-452443_682bbbcc-ebb4-4ad2-8adb-3ba01048abeb!
	W0929 10:56:47.330849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:47.335388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:49.339584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:49.345321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:51.348084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:51.352718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:53.356066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:53.360167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:55.363307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:55.367705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:57.370570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:57.375034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:59.378481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:59.383711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:01.386943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:01.391270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:03.394081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:03.398268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:05.402241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:05.406289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
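The long runs of "v1 Endpoints is deprecated" in both storage-provisioner blocks are consistent with leader-election traffic: the hostpath provisioner takes and renews its lock on the k8s.io-minikube-hostpath Endpoints object (see the LeaderElection event at the top of this block) rather than a coordination.k8s.io Lease, so every renewal round-trips through the deprecated API. A hedged way to inspect the lock (object name taken from the event above; same context assumption):

    kubectl --context functional-452443 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml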
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-452443 -n functional-452443
helpers_test.go:269: (dbg) Run:  kubectl --context functional-452443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-452443 describe pod busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-452443 describe pod busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv: exit status 1 (93.35938ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:57:57 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://2820261789a088603ce8063bfe2c7aaf645df80c99e482409fafdaac2ce2bdd1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 10:58:00 +0000
	      Finished:     Mon, 29 Sep 2025 10:58:00 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xpfzv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xpfzv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m55s  default-scheduler  Successfully assigned default/busybox-mount to functional-452443
	  Normal  Pulling    5m54s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m52s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.873s (1.873s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m52s  kubelet            Created container: mount-munger
	  Normal  Started    5m52s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-vwljh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:58:17 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9cxl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f9cxl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m35s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-vwljh to functional-452443
	  Normal   Pulling    2m42s (x5 over 5m34s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m41s (x5 over 5m33s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m41s (x5 over 5m33s)  kubelet            Error: ErrImagePull
	  Warning  Failed     24s (x20 over 5m32s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    10s (x21 over 5m32s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:57:44 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4pqtp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4pqtp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m8s                   default-scheduler  Successfully assigned default/nginx-svc to functional-452443
	  Warning  Failed     4m40s (x3 over 5m48s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m20s (x5 over 6m7s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m19s (x2 over 6m4s)   kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m19s (x5 over 6m4s)   kubelet            Error: ErrImagePull
	  Warning  Failed     60s (x20 over 6m4s)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    46s (x21 over 6m4s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:57:50 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xcvcq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-xcvcq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-452443
	  Normal   Pulling    2m51s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m50s (x5 over 6m1s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m50s (x5 over 6m1s)  kubelet            Error: ErrImagePull
	  Warning  Failed     51s (x20 over 6m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    40s (x21 over 6m)     kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-2cf6d" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wsddv" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-452443 describe pod busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv: exit status 1
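Every workload pod above (mysql, nginx-svc, sp-pod) is stuck in ImagePullBackOff for the same root cause: the kubelet's pulls from docker.io are rejected with toomanyrequests, i.e. Docker Hub's unauthenticated pull rate limit. A minimal sketch for confirming the remaining anonymous quota from the affected host, following Docker's documented ratelimitpreview/test check (jq is an assumed dependency):

# Request an anonymous pull token, then read the RateLimit-* headers from a
# HEAD request against a manifest (per Docker's docs, HEAD does not consume quota).
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl -sI -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit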
E0929 11:04:06.043193  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.70s)

TestFunctional/parallel/MySQL (602.5s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-452443 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-vwljh" [7b01fd21-a3ab-4ad8-a320-deba03206503] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0929 10:58:38.341492  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:38.347917  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:38.359332  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:38.380718  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:38.422219  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:38.503856  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:38.665525  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:38.987047  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:39.628622  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:40.910317  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:43.472211  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:48.594368  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:58:58.836523  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:59:19.318125  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:00:00.279612  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:01:22.201764  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-452443 -n functional-452443
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-29 11:08:17.942027469 +0000 UTC m=+1951.777946287
functional_test.go:1804: (dbg) Run:  kubectl --context functional-452443 describe po mysql-5bb876957f-vwljh -n default
functional_test.go:1804: (dbg) kubectl --context functional-452443 describe po mysql-5bb876957f-vwljh -n default:
Name:             mysql-5bb876957f-vwljh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-452443/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:58:17 +0000
Labels:           app=mysql
pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.14
IPs:
IP:           10.244.0.14
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9cxl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-f9cxl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-vwljh to functional-452443
Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Warning  Failed     4m50s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m36s (x21 over 9m58s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-452443 logs mysql-5bb876957f-vwljh -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-452443 logs mysql-5bb876957f-vwljh -n default: exit status 1 (69.420352ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-vwljh" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1804: kubectl --context functional-452443 logs mysql-5bb876957f-vwljh -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
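Because the mysql container never started, kubectl logs can only return the BadRequest seen above; the pull failure itself lives in the pod status. A minimal sketch (hedged, not part of the test harness) for surfacing the kubelet's waiting reason and message directly:

# Read the waiting reason/message for the first container from the pod status.
kubectl --context functional-452443 get pod mysql-5bb876957f-vwljh -n default \
  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}{.status.containerStatuses[0].state.waiting.message}{"\n"}'

Against the state captured above this would print ImagePullBackOff followed by the back-off message for docker.io/mysql:5.7.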
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-452443
helpers_test.go:243: (dbg) docker inspect functional-452443:

-- stdout --
	[
	    {
	        "Id": "9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90",
	        "Created": "2025-09-29T10:54:46.577482052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 856851,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:54:46.608065606Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/hostname",
	        "HostsPath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/hosts",
	        "LogPath": "/var/lib/docker/containers/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90/9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90-json.log",
	        "Name": "/functional-452443",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-452443:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-452443",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9704eb87e1610fbf8e5d61036532488fba16172de2fc91d3a2019eef75c75e90",
	                "LowerDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77-init/diff:/var/lib/docker/overlay2/8785ead96546661033f04e32798b1ac8b184b1c04fc8862515b87d56909b6b9d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d65999e2891604deffd47859a88151201f668d016d11ccd2ee6b155da658ce77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-452443",
	                "Source": "/var/lib/docker/volumes/functional-452443/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-452443",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-452443",
	                "name.minikube.sigs.k8s.io": "functional-452443",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf8f85b7f60f351d095574f692e84ae711dc023f4c840da4e2f2119a4435ef86",
	            "SandboxKey": "/var/run/docker/netns/cf8f85b7f60f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-452443": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:c1:98:d2:83:32",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f848f589c6bfb3b7b82d3ac7be21baaedff83a57ab4b9d0973de0715c752022b",
	                    "EndpointID": "a96c3b91746cef9f6bf60a092f5c310e0d47ba93f53e93fd0163a00f798eb508",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-452443",
	                        "9704eb87e161"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
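The inspect dump confirms the node container itself is healthy (State.Status "running", API server port 8441 bound to host port 33532), so the failures are confined to image pulls rather than the cluster. For post-mortems, single fields can be pulled from the same data with Go templates instead of scanning the full JSON; a hedged sketch using the network and port names shown above:

# Node IP on the cluster network (expected from the dump: 192.168.49.2).
docker inspect -f '{{(index .NetworkSettings.Networks "functional-452443").IPAddress}}' functional-452443
# Host-side binding of the API server port 8441 (expected from the dump: 33532).
docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-452443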
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-452443 -n functional-452443
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-452443 logs -n 25: (1.037031786s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ docker-env     │ functional-452443 docker-env                                                                                                                               │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ license        │                                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image save kicbase/echo-server:functional-452443 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image rm kicbase/echo-server:functional-452443 --alsologtostderr                                                                         │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ image          │ functional-452443 image save --daemon kicbase/echo-server:functional-452443 --alsologtostderr                                                              │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ ssh            │ functional-452443 ssh sudo cat /etc/test/nested/copy/807725/hosts                                                                                          │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 10:58 UTC │ 29 Sep 25 10:58 UTC │
	│ update-context │ functional-452443 update-context --alsologtostderr -v=2                                                                                                    │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ update-context │ functional-452443 update-context --alsologtostderr -v=2                                                                                                    │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ update-context │ functional-452443 update-context --alsologtostderr -v=2                                                                                                    │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ image          │ functional-452443 image ls --format short --alsologtostderr                                                                                                │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ image          │ functional-452443 image ls --format yaml --alsologtostderr                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ ssh            │ functional-452443 ssh pgrep buildkitd                                                                                                                      │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │                     │
	│ image          │ functional-452443 image build -t localhost/my-image:functional-452443 testdata/build --alsologtostderr                                                     │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ image          │ functional-452443 image ls                                                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ image          │ functional-452443 image ls --format json --alsologtostderr                                                                                                 │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	│ image          │ functional-452443 image ls --format table --alsologtostderr                                                                                                │ functional-452443 │ jenkins │ v1.37.0 │ 29 Sep 25 11:03 UTC │ 29 Sep 25 11:03 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:57:59
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:57:59.167696  874467 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:57:59.167964  874467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:57:59.167972  874467 out.go:374] Setting ErrFile to fd 2...
	I0929 10:57:59.167977  874467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:57:59.168177  874467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 10:57:59.168670  874467 out.go:368] Setting JSON to false
	I0929 10:57:59.169730  874467 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9620,"bootTime":1759133859,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:57:59.169842  874467 start.go:140] virtualization: kvm guest
	I0929 10:57:59.171529  874467 out.go:179] * [functional-452443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:57:59.172657  874467 notify.go:220] Checking for updates...
	I0929 10:57:59.172682  874467 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:57:59.173980  874467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:57:59.175942  874467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:57:59.177379  874467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:57:59.178525  874467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:57:59.179675  874467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:57:59.181246  874467 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:57:59.181832  874467 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:57:59.206669  874467 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:57:59.206763  874467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:57:59.264759  874467 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:57:59.252950382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:57:59.264876  874467 docker.go:318] overlay module found
	I0929 10:57:59.266521  874467 out.go:179] * Using the docker driver based on existing profile
	I0929 10:57:59.267640  874467 start.go:304] selected driver: docker
	I0929 10:57:59.267664  874467 start.go:924] validating driver "docker" against &{Name:functional-452443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-452443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:57:59.267796  874467 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:57:59.267991  874467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:57:59.327057  874467 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:57:59.317050779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:57:59.327826  874467 cni.go:84] Creating CNI manager for ""
	I0929 10:57:59.327901  874467 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:57:59.327978  874467 start.go:348] cluster config:
	{Name:functional-452443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-452443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:57:59.329787  874467 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 29 10:58:45 functional-452443 dockerd[7109]: time="2025-09-29T10:58:45.964603763Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:02 functional-452443 dockerd[7109]: time="2025-09-29T10:59:02.238485360Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:12 functional-452443 dockerd[7109]: time="2025-09-29T10:59:12.249078095Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:27 functional-452443 dockerd[7109]: time="2025-09-29T10:59:27.504186417Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 10:59:27 functional-452443 dockerd[7109]: time="2025-09-29T10:59:27.978297937Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:30 functional-452443 dockerd[7109]: time="2025-09-29T10:59:30.245673637Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:36 functional-452443 dockerd[7109]: time="2025-09-29T10:59:36.496707702Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 10:59:36 functional-452443 dockerd[7109]: time="2025-09-29T10:59:36.972803892Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 10:59:46 functional-452443 dockerd[7109]: time="2025-09-29T10:59:46.252707998Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:00:33 functional-452443 dockerd[7109]: time="2025-09-29T11:00:33.547993670Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:00:33 functional-452443 cri-dockerd[7873]: time="2025-09-29T11:00:33Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Sep 29 11:00:56 functional-452443 dockerd[7109]: time="2025-09-29T11:00:56.504053059Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:00:56 functional-452443 dockerd[7109]: time="2025-09-29T11:00:56.983321000Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:00:57 functional-452443 dockerd[7109]: time="2025-09-29T11:00:57.495553292Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:00:57 functional-452443 dockerd[7109]: time="2025-09-29T11:00:57.967045366Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:01:02 functional-452443 dockerd[7109]: time="2025-09-29T11:01:02.264921601Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:01:11 functional-452443 dockerd[7109]: time="2025-09-29T11:01:11.251854890Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:03:19 functional-452443 dockerd[7109]: time="2025-09-29T11:03:19.540030433Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:03:19 functional-452443 cri-dockerd[7873]: time="2025-09-29T11:03:19Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Sep 29 11:03:40 functional-452443 dockerd[7109]: time="2025-09-29T11:03:40.507435197Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:03:40 functional-452443 dockerd[7109]: time="2025-09-29T11:03:40.981566567Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:03:44 functional-452443 dockerd[7109]: time="2025-09-29T11:03:44.499184395Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:03:44 functional-452443 dockerd[7109]: time="2025-09-29T11:03:44.980538480Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:03:55 functional-452443 dockerd[7109]: time="2025-09-29T11:03:55.248117924Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:03:57 functional-452443 dockerd[7109]: time="2025-09-29T11:03:57.241539364Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
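	
	The repeated "toomanyrequests" entries above show every Docker Hub pull from this node failing against the unauthenticated pull rate limit. A minimal mitigation sketch, assuming valid Docker Hub credentials (<user> is a placeholder) and run outside the test itself:
	
	  # Authenticate pulls inside the minikube node:
	  minikube -p functional-452443 ssh -- docker login -u <user>
	  # Or side-load an image from the host cache so no in-node registry pull is needed:
	  docker pull nginx:alpine
	  minikube -p functional-452443 image load nginx:alpine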
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2820261789a08       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   f49c812ff73af       busybox-mount
	2b61657715b0f       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   cf9a947b36ef3       hello-node-75c85bcc94-6dswg
	519d25585d9d8       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   4c82b414cd2d4       hello-node-connect-7d85dfc575-q79gk
	0602021c398f2       52546a367cc9e                                                                                         10 minutes ago      Running             coredns                   2                   f74afc847e154       coredns-66bc5c9577-5t9hd
	03abdd4cbaff4       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       2                   bf55a1c2fa8a4       storage-provisioner
	43f526c4f63e3       df0860106674d                                                                                         10 minutes ago      Running             kube-proxy                3                   79b5b939cb7a7       kube-proxy-pxp5b
	992f09f326360       46169d968e920                                                                                         10 minutes ago      Running             kube-scheduler            3                   323d69d75a82f       kube-scheduler-functional-452443
	2af4e44854764       5f1f5298c888d                                                                                         10 minutes ago      Running             etcd                      2                   7df074a2758e8       etcd-functional-452443
	b78af3703b194       a0af72f2ec6d6                                                                                         10 minutes ago      Running             kube-controller-manager   2                   935d1cdbad833       kube-controller-manager-functional-452443
	26e494d7e0a5c       90550c43ad2bc                                                                                         10 minutes ago      Running             kube-apiserver            0                   ed1565ba1327c       kube-apiserver-functional-452443
	0ebdc90c47881       46169d968e920                                                                                         11 minutes ago      Exited              kube-scheduler            2                   16bcf253eb6dd       kube-scheduler-functional-452443
	50fd35bd47b61       df0860106674d                                                                                         11 minutes ago      Exited              kube-proxy                2                   4f6668af3d403       kube-proxy-pxp5b
	294c14d825024       6e38f40d628db                                                                                         11 minutes ago      Exited              storage-provisioner       1                   716b04da95858       storage-provisioner
	812ef773bb7b8       5f1f5298c888d                                                                                         11 minutes ago      Exited              etcd                      1                   a643879a94586       etcd-functional-452443
	f3cc3bf7eecff       a0af72f2ec6d6                                                                                         11 minutes ago      Exited              kube-controller-manager   1                   9cdfe23701ac1       kube-controller-manager-functional-452443
	69b6d4a058709       52546a367cc9e                                                                                         12 minutes ago      Exited              coredns                   1                   235e3d5e1d777       coredns-66bc5c9577-5t9hd
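	
	The column layout above (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID, POD) matches the CRI CLI output format; a hedged guess at how this table was collected (crictl talking to cri-dockerd on the node):
	
	  minikube -p functional-452443 ssh -- sudo crictl ps -a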
	
	
	==> coredns [0602021c398f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49182 - 5842 "HINFO IN 5293055615066822950.8147071532598776670. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029161007s
	
	
	==> coredns [69b6d4a05870] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50093 - 2427 "HINFO IN 5085816626060849129.4149907659589998759. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021678816s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
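	
	The TLS handshake timeouts against 10.96.0.1:443 line up with the apiserver restart recorded elsewhere in this report, after which this CoreDNS instance received SIGTERM and was replaced by [0602021c398f]. A hedged health-check sketch for the replacement instance (k8s-app=kube-dns is the stock CoreDNS label selector):
	
	  # Confirm the in-cluster apiserver Service CoreDNS was trying to list from:
	  kubectl --context functional-452443 get svc kubernetes -n default
	  # Check that the replacement CoreDNS pod came up cleanly:
	  kubectl --context functional-452443 -n kube-system logs -l k8s-app=kube-dns --tail=20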
	
	
	==> describe nodes <==
	Name:               functional-452443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-452443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=functional-452443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_55_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:55:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-452443
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:08:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:03:30 +0000   Mon, 29 Sep 2025 10:54:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:03:30 +0000   Mon, 29 Sep 2025 10:54:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:03:30 +0000   Mon, 29 Sep 2025 10:54:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:03:30 +0000   Mon, 29 Sep 2025 10:55:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-452443
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 1adfbe2ba45d45b69072dff918b31afa
	  System UUID:                555a974c-1461-41c6-a7c0-fcd5a6458b4b
	  Boot ID:                    2fed06dd-7ca1-49fc-8cb4-296c71642eea
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-6dswg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-q79gk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-vwljh                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-5t9hd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-functional-452443                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-452443              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-452443     200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pxp5b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-452443              100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-2cf6d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wsddv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node functional-452443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node functional-452443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node functional-452443 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                node-controller  Node functional-452443 event: Registered Node functional-452443 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node functional-452443 event: Registered Node functional-452443 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-452443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-452443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-452443 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-452443 event: Registered Node functional-452443 in Controller
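	
	For reference, the node summary above is standard describe output and can be reproduced directly against this cluster:
	
	  kubectl --context functional-452443 describe node functional-452443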
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 df 88 08 78 89 08 06
	[  +0.326891] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 b3 ea 3d 15 d3 08 06
	[  +0.292071] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 79 3e 6e 1c 85 08 06
	[Sep29 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 fd 85 4d 34 7c 08 06
	[  +6.578912] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 8d 25 f9 d3 f7 08 06
	[  +0.000514] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[Sep29 10:45] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 20 b3 46 3c a7 08 06
	[  +0.000504] IPv4: martian source 10.244.0.28 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 79 85 72 eb a1 08 06
	[  +0.000784] IPv4: martian source 10.244.0.28 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 7b 6f c2 51 08 06
	[Sep29 10:55] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 29 b2 98 63 f0 08 06
	[  +0.000053] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 0c 04 f0 eb c2 08 06
	[Sep29 10:56] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 80 4d 08 0b ed 08 06
	[Sep29 10:57] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a fe 75 57 99 97 08 06
	
	
	==> etcd [2af4e4485476] <==
	{"level":"warn","ts":"2025-09-29T10:57:22.747753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.753900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.760147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.767006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.772906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.779636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.786632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.792923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.799227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.805607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.811797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.818041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.824481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.831879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.838301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.845005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.851319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.857432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.873458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.880738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.888634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:22.942452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48558","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:07:22.464194Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1381}
	{"level":"info","ts":"2025-09-29T11:07:22.484354Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1381,"took":"19.797118ms","hash":2262753498,"current-db-size-bytes":3956736,"current-db-size":"4.0 MB","current-db-size-in-use-bytes":2109440,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-29T11:07:22.484447Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2262753498,"revision":1381,"compact-revision":-1}
	
	
	==> etcd [812ef773bb7b] <==
	{"level":"warn","ts":"2025-09-29T10:56:23.959078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.972590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.979257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.985825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:23.998562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:24.010682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:56:24.053788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46968","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:57:06.088584Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T10:57:06.088717Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-452443","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T10:57:06.088843Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:57:13.091154Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:57:13.091298Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091392Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091484Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T10:57:13.091458Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"error","ts":"2025-09-29T10:57:13.091495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091509Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:57:13.091545Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:57:13.091556Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:57:13.091560Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T10:57:13.091579Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T10:57:13.094068Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T10:57:13.094123Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:57:13.094153Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T10:57:13.094166Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-452443","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:08:19 up  2:50,  0 users,  load average: 0.21, 0.21, 0.41
	Linux functional-452443 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [26e494d7e0a5] <==
	I0929 10:57:44.645978       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.184.164"}
	I0929 10:57:44.672462       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.3.82"}
	I0929 10:57:45.655783       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.27.140"}
	I0929 10:58:00.303776       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 10:58:00.433906       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.110.72"}
	I0929 10:58:00.448384       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.2.211"}
	I0929 10:58:17.582425       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.119.205"}
	I0929 10:58:41.256974       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:58:50.647048       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:59:41.307768       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:00:16.165748       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:00:56.805471       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:01:41.587948       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:02:06.639953       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:02:51.141345       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:03:10.521754       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:04:08.600482       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:04:27.443976       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:05:22.775230       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:05:27.523256       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:06:45.891134       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:06:49.834348       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:07:23.347567       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 11:07:59.884579       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:08:09.440904       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [b78af3703b19] <==
	I0929 10:57:26.727219       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 10:57:26.727279       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 10:57:26.728038       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 10:57:26.728325       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 10:57:26.729228       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 10:57:26.729271       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 10:57:26.732345       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:57:26.734684       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 10:57:26.735866       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:57:26.739928       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:57:26.742296       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 10:57:26.742395       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 10:57:26.742501       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-452443"
	I0929 10:57:26.742552       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:57:26.744811       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 10:57:26.747472       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:57:26.750035       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 10:57:26.755446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 10:58:00.364718       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.371620       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.374614       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.376184       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.379516       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.388773       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:58:00.389833       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [f3cc3bf7eecf] <==
	I0929 10:56:27.853499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 10:56:27.855898       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 10:56:27.858171       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 10:56:27.860467       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 10:56:27.882000       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:56:27.882054       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 10:56:27.882108       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 10:56:27.882136       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 10:56:27.882139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 10:56:27.882252       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:56:27.882285       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 10:56:27.882348       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 10:56:27.882410       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 10:56:27.882582       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 10:56:27.882699       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-452443"
	I0929 10:56:27.882782       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 10:56:27.884650       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 10:56:27.886398       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 10:56:27.888962       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 10:56:27.889215       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:56:27.892353       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:56:27.913736       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:56:27.913764       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:56:27.913765       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:56:27.913774       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [43f526c4f63e] <==
	I0929 10:57:24.726005       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:57:24.788536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:57:24.889125       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:57:24.889190       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:57:24.889318       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:57:24.919481       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:57:24.919547       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:57:24.927324       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:57:24.928652       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:57:24.929506       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:57:24.932342       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:57:24.932363       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:57:24.932388       1 config.go:200] "Starting service config controller"
	I0929 10:57:24.932395       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:57:24.932439       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:57:24.932445       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:57:24.932488       1 config.go:309] "Starting node config controller"
	I0929 10:57:24.932507       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:57:25.032561       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 10:57:25.032594       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:57:25.032610       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:57:25.032610       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [50fd35bd47b6] <==
	I0929 10:57:18.304680       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:57:18.365710       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 10:57:18.366737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-452443&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [0ebdc90c4788] <==
	I0929 10:57:18.942790       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [992f09f32636] <==
	I0929 10:57:21.365688       1 serving.go:386] Generated self-signed cert in-memory
	W0929 10:57:23.324838       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 10:57:23.324961       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 10:57:23.324996       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 10:57:23.325042       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 10:57:23.350031       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:57:23.350072       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:57:23.352236       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:57:23.352279       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:57:23.352567       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:57:23.352863       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:57:23.453195       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:07:15 functional-452443 kubelet[8829]: E0929 11:07:15.262077    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:07:15 functional-452443 kubelet[8829]: E0929 11:07:15.262087    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:07:17 functional-452443 kubelet[8829]: E0929 11:07:17.262768    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:07:27 functional-452443 kubelet[8829]: E0929 11:07:27.262301    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:07:27 functional-452443 kubelet[8829]: E0929 11:07:27.262359    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:07:28 functional-452443 kubelet[8829]: E0929 11:07:28.260831    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:07:28 functional-452443 kubelet[8829]: E0929 11:07:28.262545    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:07:29 functional-452443 kubelet[8829]: E0929 11:07:29.262465    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:07:40 functional-452443 kubelet[8829]: E0929 11:07:40.263096    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:07:40 functional-452443 kubelet[8829]: E0929 11:07:40.263121    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:07:40 functional-452443 kubelet[8829]: E0929 11:07:40.263115    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:07:40 functional-452443 kubelet[8829]: E0929 11:07:40.263186    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:07:43 functional-452443 kubelet[8829]: E0929 11:07:43.260952    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:07:51 functional-452443 kubelet[8829]: E0929 11:07:51.262500    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:07:52 functional-452443 kubelet[8829]: E0929 11:07:52.262172    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:07:52 functional-452443 kubelet[8829]: E0929 11:07:52.262260    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:07:52 functional-452443 kubelet[8829]: E0929 11:07:52.262262    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:07:58 functional-452443 kubelet[8829]: E0929 11:07:58.261137    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:08:02 functional-452443 kubelet[8829]: E0929 11:08:02.262048    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:08:06 functional-452443 kubelet[8829]: E0929 11:08:06.262027    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7db0e382-55c4-415d-8eab-de12640ee91a"
	Sep 29 11:08:06 functional-452443 kubelet[8829]: E0929 11:08:06.262065    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	Sep 29 11:08:06 functional-452443 kubelet[8829]: E0929 11:08:06.262223    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-vwljh" podUID="7b01fd21-a3ab-4ad8-a320-deba03206503"
	Sep 29 11:08:11 functional-452443 kubelet[8829]: E0929 11:08:11.260575    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="89090c16-9901-49ad-961f-7eccda79d3e3"
	Sep 29 11:08:15 functional-452443 kubelet[8829]: E0929 11:08:15.263461    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-2cf6d" podUID="e461b8e2-b1f4-4815-88e2-19e37959ef47"
	Sep 29 11:08:17 functional-452443 kubelet[8829]: E0929 11:08:17.262125    8829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wsddv" podUID="a3e360df-cf29-4d99-91d6-aaae27aa6cfe"
	
	
	==> storage-provisioner [03abdd4cbaff] <==
	W0929 11:07:54.712885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:07:56.716314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:07:56.721720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:07:58.725230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:07:58.730658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:00.733888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:00.738333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:02.741410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:02.746115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:04.749667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:04.754995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:06.758537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:06.762716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:08.765720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:08.769838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:10.773411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:10.778758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:12.782730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:12.787441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:14.790738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:14.795273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:16.797975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:16.801972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:18.805482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:08:18.809943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [294c14d82502] <==
	I0929 10:56:45.320881       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a43b7f2-61d2-46e8-9782-7b3e373f54d4", APIVersion:"v1", ResourceVersion:"558", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-452443_682bbbcc-ebb4-4ad2-8adb-3ba01048abeb became leader
	I0929 10:56:45.320924       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-452443_682bbbcc-ebb4-4ad2-8adb-3ba01048abeb!
	W0929 10:56:45.323253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:45.327714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 10:56:45.421684       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-452443_682bbbcc-ebb4-4ad2-8adb-3ba01048abeb!
	W0929 10:56:47.330849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:47.335388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:49.339584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:49.345321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:51.348084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:51.352718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:53.356066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:53.360167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:55.363307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:55.367705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:57.370570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:57.375034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:59.378481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:56:59.383711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:01.386943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:01.391270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:03.394081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:03.398268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:05.402241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:05.406289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-452443 -n functional-452443
helpers_test.go:269: (dbg) Run:  kubectl --context functional-452443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-452443 describe pod busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-452443 describe pod busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv: exit status 1 (95.798737ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:57:57 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://2820261789a088603ce8063bfe2c7aaf645df80c99e482409fafdaac2ce2bdd1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 10:58:00 +0000
	      Finished:     Mon, 29 Sep 2025 10:58:00 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xpfzv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xpfzv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-452443
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.873s (1.873s including waiting). Image size: 4403845 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-vwljh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:58:17 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9cxl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f9cxl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-vwljh to functional-452443
	  Normal   Pulling    7m9s (x5 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m8s (x5 over 10m)      kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m8s (x5 over 10m)      kubelet            Error: ErrImagePull
	  Warning  Failed     4m51s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m37s (x21 over 9m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:57:44 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4pqtp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4pqtp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/nginx-svc to functional-452443
	  Warning  Failed     9m7s (x3 over 10m)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m47s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7m46s (x2 over 10m)  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m46s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    27s (x42 over 10m)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     27s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452443/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:57:50 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xcvcq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-xcvcq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-452443
	  Normal   Pulling    7m18s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m17s (x5 over 10m)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m17s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    21s (x41 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     21s (x41 over 10m)   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-2cf6d" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wsddv" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-452443 describe pod busybox-mount mysql-5bb876957f-vwljh nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-2cf6d kubernetes-dashboard-855c9754f9-wsddv: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.50s)
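Root cause: every failure event above ends in the same Docker Hub error, "toomanyrequests: You have reached your unauthenticated pull rate limit", hit while pulling docker.io/mysql:5.7, docker.io/nginx, docker.io/nginx:alpine, and the dashboard images, so the MySQL test logic itself is never reached. (The storage-provisioner "v1 Endpoints is deprecated" warnings are unrelated noise from its Endpoints-based leader election.) A minimal sketch of how a run like this could sidestep the limit by side-loading the image instead of letting the kubelet pull it; the subcommands are stock docker/minikube, and the profile name functional-452443 is taken from this run:

	# pull once on the host (optionally after `docker login`, so pulls count
	# against an authenticated quota), then copy the image into the node
	docker pull docker.io/mysql:5.7
	minikube -p functional-452443 image load docker.io/mysql:5.7

	# confirm the image is now present inside the cluster node
	minikube -p functional-452443 image ls | grep mysql

With the image already on the node, the pod can start without contacting Docker Hub, since a tagged image like mysql:5.7 defaults to imagePullPolicy: IfNotPresent.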

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.72s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-452443 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [7db0e382-55c4-415d-8eab-de12640ee91a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-452443 -n functional-452443
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-29 11:01:45.008539127 +0000 UTC m=+1558.844457937
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-452443 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-452443 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-452443/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:57:44 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:  10.244.0.8
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4pqtp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4pqtp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  4m1s                   default-scheduler  Successfully assigned default/nginx-svc to functional-452443
Warning  Failed     2m33s (x3 over 3m41s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    73s (x5 over 4m)       kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     72s (x2 over 3m57s)    kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     72s (x5 over 3m57s)    kubelet            Error: ErrImagePull
Normal   BackOff    11s (x14 over 3m57s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     11s (x14 over 3m57s)   kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-452443 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-452443 logs nginx-svc -n default: exit status 1 (71.144948ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-452443 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.72s)
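Same root cause as the MySQL failure above: the nginx-svc pod never becomes Ready because the docker.io/nginx:alpine pull is rate-limited, so the 4m0s wait expires before the tunnel is exercised at all. A quick triage sketch with stock kubectl; the context name is taken from this run:

	# list only the image-pull failures recorded for this pod
	kubectl --context functional-452443 get events -n default --field-selector involvedObject.name=nginx-svc,reason=Failed

	# confirm the container is stuck waiting on the image
	kubectl --context functional-452443 get pod nginx-svc -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'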

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (109.31s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0929 11:01:45.151372  807725 retry.go:31] will retry after 4.245152351s: Temporary Error: Get "http:": http: no Host in request URL
I0929 11:01:49.396756  807725 retry.go:31] will retry after 2.443784604s: Temporary Error: Get "http:": http: no Host in request URL
I0929 11:01:51.841323  807725 retry.go:31] will retry after 9.402320488s: Temporary Error: Get "http:": http: no Host in request URL
I0929 11:02:01.244670  807725 retry.go:31] will retry after 13.274190384s: Temporary Error: Get "http:": http: no Host in request URL
I0929 11:02:14.519516  807725 retry.go:31] will retry after 12.458376528s: Temporary Error: Get "http:": http: no Host in request URL
I0929 11:02:26.978949  807725 retry.go:31] will retry after 26.940225031s: Temporary Error: Get "http:": http: no Host in request URL
I0929 11:02:53.919398  807725 retry.go:31] will retry after 40.47764053s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-452443 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx-svc   LoadBalancer   10.107.3.82   10.107.3.82   80:31335/TCP   5m50s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (109.31s)
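The probe URL here is literally "http://" with an empty host: the test built its request from a blank hostname (likely because the preceding WaitService step never captured a working endpoint), even though the LoadBalancer had been assigned EXTERNAL-IP 10.107.3.82. A sketch of the manual equivalent, assuming "minikube -p functional-452443 tunnel" is running in another terminal:

	# read the assigned LoadBalancer ingress IP and fetch the page the test expects
	IP=$(kubectl --context functional-452443 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://${IP}/" | grep "Welcome to nginx!"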


Test pass (314/346)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 12.26
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.0/json-events 10.58
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.24
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 1.1
21 TestBinaryMirror 0.9
22 TestOffline 85.17
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 146.04
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.52
35 TestAddons/parallel/Registry 14.77
36 TestAddons/parallel/RegistryCreds 0.56
38 TestAddons/parallel/InspektorGadget 5.21
39 TestAddons/parallel/MetricsServer 5.74
42 TestAddons/parallel/Headlamp 19.54
43 TestAddons/parallel/CloudSpanner 5.46
45 TestAddons/parallel/NvidiaDevicePlugin 6.44
47 TestAddons/parallel/AmdGpuDevicePlugin 5.45
48 TestAddons/StoppedEnableDisable 11.2
49 TestCertOptions 26.76
50 TestCertExpiration 246.09
51 TestDockerFlags 27.5
52 TestForceSystemdFlag 28.57
53 TestForceSystemdEnv 31.99
55 TestKVMDriverInstallOrUpdate 1.07
59 TestErrorSpam/setup 22.19
60 TestErrorSpam/start 0.64
61 TestErrorSpam/status 0.94
62 TestErrorSpam/pause 1.19
63 TestErrorSpam/unpause 1.27
64 TestErrorSpam/stop 10.95
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 61.1
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 58.37
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.53
76 TestFunctional/serial/CacheCmd/cache/add_local 1.43
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.42
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 49.55
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.07
87 TestFunctional/serial/LogsFileCmd 1.08
88 TestFunctional/serial/InvalidService 4.22
90 TestFunctional/parallel/ConfigCmd 0.43
92 TestFunctional/parallel/DryRun 0.37
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.96
98 TestFunctional/parallel/ServiceCmdConnect 10.54
99 TestFunctional/parallel/AddonsCmd 0.16
102 TestFunctional/parallel/SSHCmd 0.67
103 TestFunctional/parallel/CpCmd 1.87
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.64
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.33
114 TestFunctional/parallel/License 0.41
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/ServiceCmd/DeployApp 10.15
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
122 TestFunctional/parallel/ProfileCmd/profile_list 0.41
123 TestFunctional/parallel/ServiceCmd/List 0.52
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
126 TestFunctional/parallel/MountCmd/any-port 7.61
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
128 TestFunctional/parallel/ServiceCmd/Format 0.38
129 TestFunctional/parallel/ServiceCmd/URL 0.38
130 TestFunctional/parallel/MountCmd/specific-port 1.74
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.87
132 TestFunctional/parallel/DockerEnv/bash 0.97
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.11
141 TestFunctional/parallel/ImageCommands/Setup 1.72
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.89
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.77
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.59
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.56
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
149 TestFunctional/parallel/Version/short 0.06
150 TestFunctional/parallel/Version/components 0.51
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 100.14
164 TestMultiControlPlane/serial/DeployApp 54.74
165 TestMultiControlPlane/serial/PingHostFromPods 1.11
166 TestMultiControlPlane/serial/AddWorkerNode 14.29
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
169 TestMultiControlPlane/serial/CopyFile 16.74
170 TestMultiControlPlane/serial/StopSecondaryNode 11.42
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
172 TestMultiControlPlane/serial/RestartSecondaryNode 37.37
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 163.68
175 TestMultiControlPlane/serial/DeleteSecondaryNode 9.13
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
177 TestMultiControlPlane/serial/StopCluster 32.22
178 TestMultiControlPlane/serial/RestartCluster 101.69
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
180 TestMultiControlPlane/serial/AddSecondaryNode 25.82
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.99
184 TestImageBuild/serial/Setup 24.16
185 TestImageBuild/serial/NormalBuild 1.16
186 TestImageBuild/serial/BuildWithBuildArg 0.71
187 TestImageBuild/serial/BuildWithDockerIgnore 0.51
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.53
192 TestJSONOutput/start/Command 66.13
193 TestJSONOutput/start/Audit 0
195 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/pause/Command 0.49
199 TestJSONOutput/pause/Audit 0
201 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/unpause/Command 0.45
205 TestJSONOutput/unpause/Audit 0
207 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/stop/Command 10.8
211 TestJSONOutput/stop/Audit 0
213 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
215 TestErrorJSONOutput 0.23
217 TestKicCustomNetwork/create_custom_network 25.25
218 TestKicCustomNetwork/use_default_bridge_network 23.98
219 TestKicExistingNetwork 24.67
220 TestKicCustomSubnet 25.28
221 TestKicStaticIP 25.51
222 TestMainNoArgs 0.05
223 TestMinikubeProfile 51.24
226 TestMountStart/serial/StartWithMountFirst 8.23
227 TestMountStart/serial/VerifyMountFirst 0.25
228 TestMountStart/serial/StartWithMountSecond 8.73
229 TestMountStart/serial/VerifyMountSecond 0.27
230 TestMountStart/serial/DeleteFirst 1.53
231 TestMountStart/serial/VerifyMountPostDelete 0.27
232 TestMountStart/serial/Stop 1.19
233 TestMountStart/serial/RestartStopped 9.4
234 TestMountStart/serial/VerifyMountPostStop 0.27
237 TestMultiNode/serial/FreshStart2Nodes 57.25
238 TestMultiNode/serial/DeployApp2Nodes 44.26
239 TestMultiNode/serial/PingHostFrom2Pods 0.82
240 TestMultiNode/serial/AddNode 14.03
241 TestMultiNode/serial/MultiNodeLabels 0.06
242 TestMultiNode/serial/ProfileList 0.71
243 TestMultiNode/serial/CopyFile 10.2
244 TestMultiNode/serial/StopNode 2.23
245 TestMultiNode/serial/StartAfterStop 8.83
246 TestMultiNode/serial/RestartKeepsNodes 70.1
247 TestMultiNode/serial/DeleteNode 5.3
248 TestMultiNode/serial/StopMultiNode 21.63
249 TestMultiNode/serial/RestartMultiNode 46.41
250 TestMultiNode/serial/ValidateNameConflict 26.37
255 TestPreload 141.92
257 TestScheduledStopUnix 95.03
258 TestSkaffold 83.54
260 TestInsufficientStorage 9.87
261 TestRunningBinaryUpgrade 60.01
263 TestKubernetesUpgrade 354.19
264 TestMissingContainerUpgrade 98.4
266 TestStoppedBinaryUpgrade/Setup 2.66
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
268 TestNoKubernetes/serial/StartWithK8s 40.22
269 TestStoppedBinaryUpgrade/Upgrade 74.18
270 TestNoKubernetes/serial/StartWithStopK8s 18.32
271 TestNoKubernetes/serial/Start 7.11
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
273 TestNoKubernetes/serial/ProfileList 1.49
274 TestNoKubernetes/serial/Stop 1.21
275 TestNoKubernetes/serial/StartNoArgs 9.08
276 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
286 TestPause/serial/Start 66.38
298 TestPause/serial/SecondStartNoReconfiguration 36.27
299 TestPause/serial/Pause 0.51
300 TestPause/serial/VerifyStatus 0.33
301 TestPause/serial/Unpause 0.51
302 TestPause/serial/PauseAgain 0.6
303 TestPause/serial/DeletePaused 2.2
304 TestPause/serial/VerifyDeletedResources 0.65
306 TestStartStop/group/old-k8s-version/serial/FirstStart 79.95
308 TestStartStop/group/no-preload/serial/FirstStart 72.99
309 TestStartStop/group/old-k8s-version/serial/DeployApp 10.28
310 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
311 TestStartStop/group/old-k8s-version/serial/Stop 10.87
312 TestStartStop/group/no-preload/serial/DeployApp 10.25
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
314 TestStartStop/group/old-k8s-version/serial/SecondStart 25.77
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
316 TestStartStop/group/no-preload/serial/Stop 10.85
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
318 TestStartStop/group/no-preload/serial/SecondStart 51
319 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
320 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
321 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
322 TestStartStop/group/old-k8s-version/serial/Pause 2.36
324 TestStartStop/group/embed-certs/serial/FirstStart 67
325 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
327 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
328 TestStartStop/group/no-preload/serial/Pause 2.73
330 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.61
332 TestStartStop/group/newest-cni/serial/FirstStart 30.69
333 TestStartStop/group/embed-certs/serial/DeployApp 9.24
334 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.78
336 TestStartStop/group/newest-cni/serial/Stop 10.91
337 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.81
338 TestStartStop/group/embed-certs/serial/Stop 10.86
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
340 TestStartStop/group/newest-cni/serial/SecondStart 17.25
341 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
342 TestStartStop/group/embed-certs/serial/SecondStart 50.76
343 TestNetworkPlugins/group/auto/Start 72.98
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
347 TestStartStop/group/newest-cni/serial/Pause 2.55
348 TestNetworkPlugins/group/kindnet/Start 59.64
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.35
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.97
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
353 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.4
354 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
355 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
356 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
357 TestStartStop/group/embed-certs/serial/Pause 2.58
358 TestNetworkPlugins/group/calico/Start 53.53
359 TestNetworkPlugins/group/auto/KubeletFlags 0.38
360 TestNetworkPlugins/group/auto/NetCatPod 9.21
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/auto/DNS 0.15
363 TestNetworkPlugins/group/auto/Localhost 0.13
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
365 TestNetworkPlugins/group/auto/HairPin 0.14
366 TestNetworkPlugins/group/kindnet/NetCatPod 9.22
367 TestNetworkPlugins/group/kindnet/DNS 0.21
368 TestNetworkPlugins/group/kindnet/Localhost 0.21
369 TestNetworkPlugins/group/kindnet/HairPin 0.16
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
372 TestNetworkPlugins/group/custom-flannel/Start 51.3
373 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
374 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.77
375 TestNetworkPlugins/group/false/Start 69.07
376 TestNetworkPlugins/group/enable-default-cni/Start 71.18
377 TestNetworkPlugins/group/calico/ControllerPod 6.01
378 TestNetworkPlugins/group/calico/KubeletFlags 0.29
379 TestNetworkPlugins/group/calico/NetCatPod 10.22
380 TestNetworkPlugins/group/calico/DNS 0.18
381 TestNetworkPlugins/group/calico/Localhost 0.14
382 TestNetworkPlugins/group/calico/HairPin 0.14
383 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
384 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.21
385 TestNetworkPlugins/group/flannel/Start 44.91
386 TestNetworkPlugins/group/custom-flannel/DNS 0.15
387 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
388 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
389 TestNetworkPlugins/group/false/KubeletFlags 0.33
390 TestNetworkPlugins/group/false/NetCatPod 9.26
391 TestNetworkPlugins/group/bridge/Start 41.36
392 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
393 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.22
394 TestNetworkPlugins/group/false/DNS 0.15
395 TestNetworkPlugins/group/false/Localhost 0.13
396 TestNetworkPlugins/group/false/HairPin 0.19
397 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
398 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
399 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
400 TestNetworkPlugins/group/flannel/ControllerPod 6.01
401 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
402 TestNetworkPlugins/group/flannel/NetCatPod 10.3
403 TestNetworkPlugins/group/kubenet/Start 71.84
404 TestNetworkPlugins/group/flannel/DNS 0.2
405 TestNetworkPlugins/group/flannel/Localhost 0.13
406 TestNetworkPlugins/group/flannel/HairPin 0.13
407 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
408 TestNetworkPlugins/group/bridge/NetCatPod 9.21
409 TestNetworkPlugins/group/bridge/DNS 0.15
410 TestNetworkPlugins/group/bridge/Localhost 0.12
411 TestNetworkPlugins/group/bridge/HairPin 0.12
412 TestNetworkPlugins/group/kubenet/KubeletFlags 0.28
413 TestNetworkPlugins/group/kubenet/NetCatPod 10.18
414 TestNetworkPlugins/group/kubenet/DNS 0.15
415 TestNetworkPlugins/group/kubenet/Localhost 0.12
416 TestNetworkPlugins/group/kubenet/HairPin 0.12

TestDownloadOnly/v1.28.0/json-events (12.26s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-079557 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-079557 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.258048588s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.26s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 10:35:58.464695  807725 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0929 10:35:58.464795  807725 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-079557
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-079557: exit status 85 (68.086573ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-079557 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-079557 │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:35:46
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:35:46.250974  807737 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:35:46.251270  807737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:35:46.251281  807737 out.go:374] Setting ErrFile to fd 2...
	I0929 10:35:46.251286  807737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:35:46.251556  807737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	W0929 10:35:46.251741  807737 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21656-804126/.minikube/config/config.json: open /home/jenkins/minikube-integration/21656-804126/.minikube/config/config.json: no such file or directory
	I0929 10:35:46.252345  807737 out.go:368] Setting JSON to true
	I0929 10:35:46.253399  807737 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8287,"bootTime":1759133859,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:35:46.253542  807737 start.go:140] virtualization: kvm guest
	I0929 10:35:46.255876  807737 out.go:99] [download-only-079557] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:35:46.256070  807737 notify.go:220] Checking for updates...
	W0929 10:35:46.256106  807737 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 10:35:46.257633  807737 out.go:171] MINIKUBE_LOCATION=21656
	I0929 10:35:46.258866  807737 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:35:46.260171  807737 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:35:46.261293  807737 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:35:46.262536  807737 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 10:35:46.264630  807737 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 10:35:46.264973  807737 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:35:46.289580  807737 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:35:46.289720  807737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:35:46.346807  807737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 10:35:46.336044396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:35:46.347017  807737 docker.go:318] overlay module found
	I0929 10:35:46.348631  807737 out.go:99] Using the docker driver based on user configuration
	I0929 10:35:46.348665  807737 start.go:304] selected driver: docker
	I0929 10:35:46.348675  807737 start.go:924] validating driver "docker" against <nil>
	I0929 10:35:46.348821  807737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:35:46.403708  807737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 10:35:46.393971989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:35:46.403881  807737 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:35:46.404495  807737 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 10:35:46.404674  807737 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 10:35:46.406486  807737 out.go:171] Using Docker driver with root privileges
	I0929 10:35:46.407643  807737 cni.go:84] Creating CNI manager for ""
	I0929 10:35:46.407719  807737 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:35:46.407732  807737 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:35:46.407834  807737 start.go:348] cluster config:
	{Name:download-only-079557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-079557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:35:46.409100  807737 out.go:99] Starting "download-only-079557" primary control-plane node in "download-only-079557" cluster
	I0929 10:35:46.409140  807737 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 10:35:46.410261  807737 out.go:99] Pulling base image v0.0.48 ...
	I0929 10:35:46.410293  807737 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0929 10:35:46.410413  807737 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:35:46.428953  807737 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:35:46.429169  807737 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:35:46.429286  807737 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:35:46.757864  807737 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I0929 10:35:46.757897  807737 cache.go:58] Caching tarball of preloaded images
	I0929 10:35:46.758071  807737 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0929 10:35:46.759772  807737 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 10:35:46.759791  807737 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 ...
	I0929 10:35:46.856220  807737 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I0929 10:35:50.527821  807737 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	
	
	* The control-plane node download-only-079557 host does not exist
	  To start a cluster, run: "minikube start -p download-only-079557"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
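Note: the preload fetch logged above uses a checksum-tagged URL (?checksum=md5:...), so the tarball is verified as it lands on disk. A minimal sketch of that pattern in Go, standard library only; the URL and md5 digest are taken verbatim from the download.go:108 line above, while the helper name and error handling are illustrative rather than minikube's actual download code:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest, hashing while writing, and rejects the
// file on a digest mismatch. (Hypothetical helper; not minikube's download.go.)
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and md5 as logged above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4"
	if err := downloadWithMD5(url, "preload.tar.lz4", "8a955be835827bc584bcce0658a7fcc9"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```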

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-079557
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.0/json-events (10.58s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-239254 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-239254 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (10.575268958s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (10.58s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 10:36:09.486934  807725 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0929 10:36:09.487013  807725 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-239254
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-239254: exit status 85 (67.309178ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-079557 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-079557 │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │ 29 Sep 25 10:35 UTC │
	│ delete  │ -p download-only-079557                                                                                                                                                       │ download-only-079557 │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │ 29 Sep 25 10:35 UTC │
	│ start   │ -o=json --download-only -p download-only-239254 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-239254 │ jenkins │ v1.37.0 │ 29 Sep 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:35:58
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:35:58.958449  808115 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:35:58.958570  808115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:35:58.958580  808115 out.go:374] Setting ErrFile to fd 2...
	I0929 10:35:58.958585  808115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:35:58.958783  808115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 10:35:58.959277  808115 out.go:368] Setting JSON to true
	I0929 10:35:58.960184  808115 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8300,"bootTime":1759133859,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:35:58.960298  808115 start.go:140] virtualization: kvm guest
	I0929 10:35:58.962156  808115 out.go:99] [download-only-239254] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:35:58.962320  808115 notify.go:220] Checking for updates...
	I0929 10:35:58.963432  808115 out.go:171] MINIKUBE_LOCATION=21656
	I0929 10:35:58.964653  808115 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:35:58.965925  808115 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:35:58.967188  808115 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:35:58.968237  808115 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 10:35:58.970125  808115 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 10:35:58.970433  808115 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:35:58.994366  808115 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:35:58.994493  808115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:35:59.051414  808115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:35:59.040917959 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:35:59.051583  808115 docker.go:318] overlay module found
	I0929 10:35:59.053215  808115 out.go:99] Using the docker driver based on user configuration
	I0929 10:35:59.053249  808115 start.go:304] selected driver: docker
	I0929 10:35:59.053255  808115 start.go:924] validating driver "docker" against <nil>
	I0929 10:35:59.053341  808115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:35:59.107928  808115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:35:59.09780106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:35:59.108129  808115 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:35:59.108888  808115 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 10:35:59.109125  808115 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 10:35:59.110775  808115 out.go:171] Using Docker driver with root privileges
	I0929 10:35:59.111934  808115 cni.go:84] Creating CNI manager for ""
	I0929 10:35:59.112011  808115 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 10:35:59.112025  808115 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 10:35:59.112110  808115 start.go:348] cluster config:
	{Name:download-only-239254 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-239254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:35:59.113322  808115 out.go:99] Starting "download-only-239254" primary control-plane node in "download-only-239254" cluster
	I0929 10:35:59.113347  808115 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 10:35:59.114569  808115 out.go:99] Pulling base image v0.0.48 ...
	I0929 10:35:59.114599  808115 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:35:59.114742  808115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:35:59.132895  808115 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:35:59.133076  808115 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:35:59.133094  808115 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:35:59.133101  808115 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:35:59.133111  808115 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:35:59.456005  808115 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 10:35:59.456042  808115 cache.go:58] Caching tarball of preloaded images
	I0929 10:35:59.456206  808115 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:35:59.457865  808115 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0929 10:35:59.457885  808115 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 ...
	I0929 10:35:59.560790  808115 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4?checksum=md5:994a4de1464928e89c992dfd0a962e35 -> /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 10:36:07.840163  808115 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 ...
	I0929 10:36:07.840291  808115 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21656-804126/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 ...
	I0929 10:36:08.540339  808115 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 10:36:08.540753  808115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/download-only-239254/config.json ...
	I0929 10:36:08.540798  808115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/download-only-239254/config.json: {Name:mk948d776cf990c6c5aa12817bcf62d2260e2637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:36:08.541019  808115 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 10:36:08.541227  808115 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21656-804126/.minikube/cache/linux/amd64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-239254 host does not exist
	  To start a cluster, run: "minikube start -p download-only-239254"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)
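Note: unlike the preload (inline md5), the kubectl download at the end of the log above uses checksum=file:<url>, i.e. the expected digest is itself fetched from a sibling .sha256 file. A rough sketch of that verification mode, assuming nothing about minikube's internals beyond what the log shows:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchSHA256 pulls the expected digest from a remote .sha256 file, which may
// contain either a bare digest or "digest  filename".
func fetchSHA256(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(b))
	if len(fields) == 0 {
		return "", fmt.Errorf("empty checksum file at %s", url)
	}
	return fields[0], nil
}

func main() {
	const base = "https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl"
	want, err := fetchSHA256(base + ".sha256")
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(base)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("kubectl")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Hash the stream while writing it to disk, then compare digests.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		panic(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: %s != %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("kubectl verified")
}
```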

TestDownloadOnly/v1.34.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.24s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-239254
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (1.1s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-537454 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-537454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-537454
--- PASS: TestDownloadOnlyKic (1.10s)

TestBinaryMirror (0.9s)

=== RUN   TestBinaryMirror
I0929 10:36:11.334313  807725 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-606832 --alsologtostderr --binary-mirror http://127.0.0.1:41679 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-606832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-606832
--- PASS: TestBinaryMirror (0.90s)
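Note: TestBinaryMirror points minikube at a local HTTP server via --binary-mirror http://127.0.0.1:41679. Any static file server with the right directory layout can play that role; a minimal sketch, where the release/<version>/bin/linux/amd64 layout is an assumption inferred from the dl.k8s.io URLs above:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve pre-downloaded Kubernetes binaries from ./mirror, e.g.
	// ./mirror/release/v1.34.0/bin/linux/amd64/kubectl
	// (port matches the --binary-mirror flag shown in the log).
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:41679", nil))
}
```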

TestOffline (85.17s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-857429 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-857429 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m20.242557814s)
helpers_test.go:175: Cleaning up "offline-docker-857429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-857429
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-857429: (4.925859156s)
--- PASS: TestOffline (85.17s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-529420
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-529420: exit status 85 (60.618542ms)

-- stdout --
	* Profile "addons-529420" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-529420"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
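Note: both PreSetup tests pass because the command fails in the expected way: minikube exits with status 85 when the target profile does not exist. A sketch of asserting that from Go; the 85 is read off this log, not a documented contract:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-529420")
	err := cmd.Run()

	// exec.ExitError exposes the child's exit code on non-zero exits.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Println("got the expected exit status 85 for a missing profile")
		return
	}
	fmt.Fprintf(os.Stderr, "unexpected result: %v\n", err)
	os.Exit(1)
}
```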

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-529420
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-529420: exit status 85 (60.558106ms)

-- stdout --
	* Profile "addons-529420" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-529420"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (146.04s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-529420 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-529420 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m26.039304288s)
--- PASS: TestAddons/Setup (146.04s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-529420 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-529420 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.52s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-529420 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-529420 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d03cac9c-bafe-48cd-b5c0-88448b6532ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d03cac9c-bafe-48cd-b5c0-88448b6532ad] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004639935s
addons_test.go:694: (dbg) Run:  kubectl --context addons-529420 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-529420 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-529420 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.52s)

TestAddons/parallel/Registry (14.77s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.29302ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-9545s" [6b827e39-0a4c-4539-b408-9bfd1cdda6c1] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003613972s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-b2bpz" [87ba9cf0-8f09-4423-b674-f77637763bcc] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003296182s
addons_test.go:392: (dbg) Run:  kubectl --context addons-529420 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-529420 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-529420 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.975967432s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.77s)
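Note: the wget --spider step above is essentially a reachability probe against the registry Service's cluster DNS name. A rough in-cluster equivalent in Go; it must run inside a pod (like the registry-test pod above) for the name to resolve, and HEAD vs. GET is an illustrative simplification:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Resolvable only from inside the cluster.
	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Fprintln(os.Stderr, "registry unreachable:", err)
		os.Exit(1)
	}
	resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}
```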

TestAddons/parallel/RegistryCreds (0.56s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.108757ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-529420
addons_test.go:332: (dbg) Run:  kubectl --context addons-529420 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.56s)

TestAddons/parallel/InspektorGadget (5.21s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-8bvtc" [eb5bc70d-21cf-4b9a-be1c-0ff35a3bf296] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003133875s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.21s)

TestAddons/parallel/MetricsServer (5.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.148559ms
I0929 10:45:10.560190  807725 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 10:45:10.560219  807725 kapi.go:107] duration metric: took 4.333846ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-6kvbb" [2261b64a-3e18-4fca-8fb4-4a870b5d302f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003716851s
addons_test.go:463: (dbg) Run:  kubectl --context addons-529420 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)

TestAddons/parallel/Headlamp (19.54s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-529420 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-nfwfq" [61ea7656-fe4e-4260-9e2c-eb2261a72a85] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-nfwfq" [61ea7656-fe4e-4260-9e2c-eb2261a72a85] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-nfwfq" [61ea7656-fe4e-4260-9e2c-eb2261a72a85] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003049236s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-529420 addons disable headlamp --alsologtostderr -v=1: (5.750130718s)
--- PASS: TestAddons/parallel/Headlamp (19.54s)

TestAddons/parallel/CloudSpanner (5.46s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-4djrt" [70dba0ec-2d81-4d3d-9170-96529734c892] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003115045s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.46s)

TestAddons/parallel/NvidiaDevicePlugin (6.44s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rv5nz" [70340435-f458-4a2a-a617-185f04893887] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004031312s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.44s)

TestAddons/parallel/AmdGpuDevicePlugin (5.45s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-5pr2g" [d246c61e-5b06-4a8a-b834-a8104ed29092] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004060121s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-529420 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.45s)

TestAddons/StoppedEnableDisable (11.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-529420
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-529420: (10.930203083s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-529420
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-529420
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-529420
--- PASS: TestAddons/StoppedEnableDisable (11.20s)

TestCertOptions (26.76s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-743298 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-743298 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (23.985434085s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-743298 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-743298 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-743298 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-743298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-743298
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-743298: (2.156845849s)
--- PASS: TestCertOptions (26.76s)
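Note: what TestCertOptions verifies with openssl x509 -text is that the extra --apiserver-ips and --apiserver-names ended up as SANs in the apiserver certificate. The same check is easy with crypto/x509, assuming the cert has already been copied off the node (path as in the log; fetching it is out of scope here):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// e.g. copied from the node's /var/lib/minikube/certs/apiserver.crt
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com, ...
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
}
```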

TestCertExpiration (246.09s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-832494 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-832494 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (26.237149341s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-832494 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0929 11:40:00.859006  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-832494 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (36.196413432s)
helpers_test.go:175: Cleaning up "cert-expiration-832494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-832494
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-832494: (3.657612791s)
--- PASS: TestCertExpiration (246.09s)

TestDockerFlags (27.5s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-851183 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-851183 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.222222268s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-851183 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-851183 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-851183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-851183
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-851183: (2.667541045s)
--- PASS: TestDockerFlags (27.50s)

TestForceSystemdFlag (28.57s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-111277 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-111277 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.919652249s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-111277 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-111277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-111277
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-111277: (2.305310202s)
--- PASS: TestForceSystemdFlag (28.57s)

TestForceSystemdEnv (31.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-665384 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-665384 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (29.39079231s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-665384 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-665384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-665384
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-665384: (2.182267013s)
--- PASS: TestForceSystemdEnv (31.99s)

TestKVMDriverInstallOrUpdate (1.07s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0929 11:36:14.513524  807725 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 11:36:14.513687  807725 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1228217548/001:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:36:14.552052  807725 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1228217548/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 11:36:14.552102  807725 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 11:36:14.552266  807725 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 11:36:14.552344  807725 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1228217548/001/docker-machine-driver-kvm2
I0929 11:36:15.349505  807725 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1228217548/001:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:36:15.370120  807725 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1228217548/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.07s)
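Note: the install-or-update flow above is: validate the existing driver binary's version (1.1.1), notice it is not the wanted 1.37.0, download the release, and re-validate. A simplified sketch of that decision; invoking the driver with a version argument is an assumption read off the "Validating ... version is 1.1.1" lines, and real version comparison is more careful than a substring match:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsUpdate asks the installed driver for its version and reports whether it
// differs from the wanted release. (Hypothetical helper; substring matching
// stands in for proper semver comparison.)
func needsUpdate(driverPath, want string) bool {
	out, err := exec.Command(driverPath, "version").Output()
	if err != nil {
		return true // missing or broken binary: (re)install
	}
	return !strings.Contains(string(out), want)
}

func main() {
	driver := "/tmp/TestKVMDriverInstallOrUpdate1228217548/001/docker-machine-driver-kvm2"
	if needsUpdate(driver, "1.37.0") {
		fmt.Println("would download",
			"https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64")
	}
}
```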

TestErrorSpam/setup (22.19s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-889515 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-889515 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-889515 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-889515 --driver=docker  --container-runtime=docker: (22.193480911s)
--- PASS: TestErrorSpam/setup (22.19s)

TestErrorSpam/start (0.64s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

TestErrorSpam/status (0.94s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 status
--- PASS: TestErrorSpam/status (0.94s)

TestErrorSpam/pause (1.19s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 pause
--- PASS: TestErrorSpam/pause (1.19s)

TestErrorSpam/unpause (1.27s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 unpause
--- PASS: TestErrorSpam/unpause (1.27s)

TestErrorSpam/stop (10.95s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 stop: (10.753775745s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 stop
--- PASS: TestErrorSpam/stop (10.95s)
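
Note: each TestErrorSpam subtest above runs a single subcommand (start, status, pause, unpause, stop) against the nospam profile with --log_dir and then scans the captured log files for unexpected warnings or errors. A rough by-hand approximation (the grep pattern is only an illustration; the real test applies its own allow-list):

    # Run one subcommand with logs captured, then look for noisy output.
    out/minikube-linux-amd64 -p nospam-889515 --log_dir /tmp/nospam-889515 status
    grep -riE 'error|warning' /tmp/nospam-889515 && echo 'unexpected spam' || echo 'logs clean'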

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21656-804126/.minikube/files/etc/test/nested/copy/807725/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.1s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452443 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-452443 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m1.100932779s)
--- PASS: TestFunctional/serial/StartWithProxy (61.10s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (58.37s)
=== RUN   TestFunctional/serial/SoftStart
I0929 10:55:43.035567  807725 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452443 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-452443 --alsologtostderr -v=8: (58.369776249s)
functional_test.go:678: soft start took 58.370602452s for "functional-452443" cluster.
I0929 10:56:41.405804  807725 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (58.37s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-452443 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.53s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.53s)

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-452443 /tmp/TestFunctionalserialCacheCmdcacheadd_local572633993/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 cache add minikube-local-cache-test:functional-452443
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-452443 cache add minikube-local-cache-test:functional-452443: (1.062940044s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 cache delete minikube-local-cache-test:functional-452443
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-452443
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)
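
Note: add_local round-trips a locally built image through minikube's cache. Condensed to plain commands (tag and profile copied from this run; the build context is a placeholder, since the test used a throwaway temp directory):

    # Build any trivial image under the test tag, cache it, then clean up.
    docker build -t minikube-local-cache-test:functional-452443 .
    out/minikube-linux-amd64 -p functional-452443 cache add minikube-local-cache-test:functional-452443
    out/minikube-linux-amd64 -p functional-452443 cache delete minikube-local-cache-test:functional-452443
    docker rmi minikube-local-cache-test:functional-452443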

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.42s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452443 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.589822ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.42s)
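
Note: the reload check above deletes a cached image inside the node, confirms crictl no longer finds it (exit 1), runs cache reload, and confirms the image is back. As a standalone sequence (commands taken directly from the log):

    out/minikube-linux-amd64 -p functional-452443 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-452443 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # expect exit 1: image gone
    out/minikube-linux-amd64 -p functional-452443 cache reload
    out/minikube-linux-amd64 -p functional-452443 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # expect exit 0: image restored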

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 kubectl -- --context functional-452443 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-452443 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (49.55s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452443 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-452443 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.549532068s)
functional_test.go:776: restart took 49.549792485s for "functional-452443" cluster.
I0929 10:57:37.202994  807725 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (49.55s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-452443 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
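
Note: ComponentHealth fetches the control-plane pods as JSON and asserts each is Running and Ready, as logged above. A compact kubectl equivalent (the jsonpath template is an illustration, not the test's own query):

    kubectl --context functional-452443 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'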

TestFunctional/serial/LogsCmd (1.07s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-452443 logs: (1.065763538s)
--- PASS: TestFunctional/serial/LogsCmd (1.07s)

TestFunctional/serial/LogsFileCmd (1.08s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 logs --file /tmp/TestFunctionalserialLogsFileCmd741847855/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-452443 logs --file /tmp/TestFunctionalserialLogsFileCmd741847855/001/logs.txt: (1.081417068s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.08s)

TestFunctional/serial/InvalidService (4.22s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-452443 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-452443
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-452443: exit status 115 (356.108378ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30087 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-452443 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452443 config get cpus: exit status 14 (82.438334ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452443 config get cpus: exit status 14 (77.161174ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
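
Note: the config round trip above shows the expected exit codes: get on an unset key exits 14, set/get succeed, and get after unset exits 14 again. Reproduced directly:

    out/minikube-linux-amd64 -p functional-452443 config get cpus    # exit 14: key not set
    out/minikube-linux-amd64 -p functional-452443 config set cpus 2
    out/minikube-linux-amd64 -p functional-452443 config get cpus    # prints 2
    out/minikube-linux-amd64 -p functional-452443 config unset cpus
    out/minikube-linux-amd64 -p functional-452443 config get cpus    # exit 14 again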

TestFunctional/parallel/DryRun (0.37s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452443 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-452443 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (159.076852ms)
-- stdout --
	* [functional-452443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0929 10:57:59.008565  874385 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:57:59.008803  874385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:57:59.008811  874385 out.go:374] Setting ErrFile to fd 2...
	I0929 10:57:59.008815  874385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:57:59.009027  874385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 10:57:59.009490  874385 out.go:368] Setting JSON to false
	I0929 10:57:59.010580  874385 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9620,"bootTime":1759133859,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:57:59.010692  874385 start.go:140] virtualization: kvm guest
	I0929 10:57:59.012480  874385 out.go:179] * [functional-452443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:57:59.013698  874385 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:57:59.013708  874385 notify.go:220] Checking for updates...
	I0929 10:57:59.016354  874385 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:57:59.017539  874385 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:57:59.018602  874385 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:57:59.019689  874385 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:57:59.020713  874385 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:57:59.022314  874385 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:57:59.023107  874385 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:57:59.048200  874385 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:57:59.048315  874385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:57:59.108440  874385 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:57:59.096452405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:57:59.108561  874385 docker.go:318] overlay module found
	I0929 10:57:59.110307  874385 out.go:179] * Using the docker driver based on existing profile
	I0929 10:57:59.111572  874385 start.go:304] selected driver: docker
	I0929 10:57:59.111589  874385 start.go:924] validating driver "docker" against &{Name:functional-452443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-452443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:57:59.111704  874385 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:57:59.113704  874385 out.go:203] 
	W0929 10:57:59.114964  874385 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 10:57:59.116299  874385 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452443 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.37s)
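
Note: DryRun checks that an undersized --memory request is rejected at validation time (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY) while a plain dry run succeeds. By hand:

    out/minikube-linux-amd64 start -p functional-452443 --dry-run --memory 250MB --driver=docker --container-runtime=docker
    echo $?   # expect 23: 250MiB is below the usable minimum of 1800MB
    out/minikube-linux-amd64 start -p functional-452443 --dry-run --driver=docker --container-runtime=docker
    echo $?   # expect 0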

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452443 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-452443 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (155.514337ms)
-- stdout --
	* [functional-452443] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0929 10:57:58.856631  874306 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:57:58.856747  874306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:57:58.856757  874306 out.go:374] Setting ErrFile to fd 2...
	I0929 10:57:58.856761  874306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:57:58.857118  874306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 10:57:58.857640  874306 out.go:368] Setting JSON to false
	I0929 10:57:58.858808  874306 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9620,"bootTime":1759133859,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:57:58.858919  874306 start.go:140] virtualization: kvm guest
	I0929 10:57:58.861040  874306 out.go:179] * [functional-452443] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 10:57:58.862336  874306 notify.go:220] Checking for updates...
	I0929 10:57:58.862365  874306 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:57:58.863650  874306 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:57:58.865250  874306 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	I0929 10:57:58.866539  874306 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	I0929 10:57:58.867586  874306 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:57:58.868597  874306 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:57:58.870071  874306 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 10:57:58.870601  874306 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:57:58.894858  874306 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:57:58.895019  874306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:57:58.950594  874306 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:57:58.939958275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:57:58.950766  874306 docker.go:318] overlay module found
	I0929 10:57:58.952454  874306 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0929 10:57:58.953505  874306 start.go:304] selected driver: docker
	I0929 10:57:58.953523  874306 start.go:924] validating driver "docker" against &{Name:functional-452443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-452443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:57:58.953618  874306 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:57:58.955103  874306 out.go:203] 
	W0929 10:57:58.956047  874306 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 10:57:58.956995  874306 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.96s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)

TestFunctional/parallel/ServiceCmdConnect (10.54s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-452443 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-452443 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-q79gk" [66edfc53-d45e-45af-a602-4d7ece2dde0e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-q79gk" [66edfc53-d45e-45af-a602-4d7ece2dde0e] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004616678s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30137
functional_test.go:1680: http://192.168.49.2:30137: success! body:
Request served by hello-node-connect-7d85dfc575-q79gk

HTTP/1.1 GET /

Host: 192.168.49.2:30137
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.54s)
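
Note: ServiceCmdConnect deploys the echo server, exposes it as a NodePort service, resolves its URL through minikube, and verifies the HTTP echo shown above. As commands (the closing curl is an illustration; the test uses an in-process HTTP client):

    kubectl --context functional-452443 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-452443 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-452443 service hello-node-connect --url)
    curl -s "$URL"   # echo-server reflects the request headers back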

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/SSHCmd (0.67s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (1.87s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh -n functional-452443 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 cp functional-452443:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3454953591/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh -n functional-452443 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh -n functional-452443 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.87s)

TestFunctional/parallel/FileSync (0.27s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/807725/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "sudo cat /etc/test/nested/copy/807725/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.64s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/807725.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "sudo cat /etc/ssl/certs/807725.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/807725.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "sudo cat /usr/share/ca-certificates/807725.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8077252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "sudo cat /etc/ssl/certs/8077252.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8077252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "sudo cat /usr/share/ca-certificates/8077252.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.64s)
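
Note: CertSync verifies that test certificates are installed in the node under both /etc/ssl/certs and /usr/share/ca-certificates, including their OpenSSL hash aliases (51391683.0, 3ec20f2e.0). Spot-checking one pair by hand, with paths copied from the log:

    out/minikube-linux-amd64 -p functional-452443 ssh "sudo cat /etc/ssl/certs/807725.pem"
    out/minikube-linux-amd64 -p functional-452443 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash alias of the same cert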

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-452443 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452443 ssh "sudo systemctl is-active crio": exit status 1 (333.753025ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)
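
Note: with docker as the active runtime, crio must be inactive; systemctl is-active exits 3 for an inactive unit, which surfaces above as the non-zero ssh exit. Checked directly:

    out/minikube-linux-amd64 -p functional-452443 ssh "sudo systemctl is-active crio"   # prints "inactive", exits non-zero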

TestFunctional/parallel/License (0.41s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-452443 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-452443 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-452443 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-452443 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 871032: os: process already finished
helpers_test.go:519: unable to terminate pid 870698: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-452443 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-452443 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-452443 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-6dswg" [ccdc7f10-81e0-4791-8d3f-f2720bbdf2dc] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-6dswg" [ccdc7f10-81e0-4791-8d3f-f2720bbdf2dc] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004354477s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "345.315451ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.776128ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ServiceCmd/List (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "341.346339ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "58.356838ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 service list -o json
functional_test.go:1504: Took "533.027366ms" to run "out/minikube-linux-amd64 -p functional-452443 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/MountCmd/any-port (7.61s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452443 /tmp/TestFunctionalparallelMountCmdany-port3519185164/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759143476225362184" to /tmp/TestFunctionalparallelMountCmdany-port3519185164/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759143476225362184" to /tmp/TestFunctionalparallelMountCmdany-port3519185164/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759143476225362184" to /tmp/TestFunctionalparallelMountCmdany-port3519185164/001/test-1759143476225362184
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452443 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.57361ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0929 10:57:56.527462  807725 retry.go:31] will retry after 301.398385ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 10:57 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 10:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 10:57 test-1759143476225362184
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh cat /mount-9p/test-1759143476225362184
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-452443 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b0dda451-4a65-418d-9be0-ddf1405d6616] Pending
helpers_test.go:352: "busybox-mount" [b0dda451-4a65-418d-9be0-ddf1405d6616] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b0dda451-4a65-418d-9be0-ddf1405d6616] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b0dda451-4a65-418d-9be0-ddf1405d6616] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003352108s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-452443 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452443 /tmp/TestFunctionalparallelMountCmdany-port3519185164/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.61s)
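
The `retry.go:31] will retry after 301.398385ms` line above is the probe-and-backoff idiom: the first `findmnt` runs before the 9p mount has appeared, so the check is re-run after a short, jittered delay. A minimal sketch of the same idiom (attempt count and backoff constants are ours):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func main() {
        backoff := 300 * time.Millisecond
        for attempt := 0; attempt < 5; attempt++ {
            // Probe for the 9p mount inside the node, as the test does.
            out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-452443",
                "ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
            if err == nil {
                fmt.Printf("mounted:\n%s", out)
                return
            }
            // Jittered, doubling backoff before the next probe.
            d := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
            backoff *= 2
        }
        fmt.Println("mount never appeared")
    }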

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31351
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31351
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/MountCmd/specific-port (1.74s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452443 /tmp/TestFunctionalparallelMountCmdspecific-port4253104569/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452443 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.493308ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0929 10:58:04.115482  807725 retry.go:31] will retry after 446.893421ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452443 /tmp/TestFunctionalparallelMountCmdspecific-port4253104569/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452443 ssh "sudo umount -f /mount-9p": exit status 1 (270.546741ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-452443 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452443 /tmp/TestFunctionalparallelMountCmdspecific-port4253104569/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)
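
Note the tolerated failure above: by the time the deferred cleanup runs `sudo umount -f`, the mount daemon has already been stopped, so umount reports "not mounted" (its exit status 32, surfaced here as ssh exit status 32) and the test logs it without failing. A sketch of cleanup written to the same rule (function name is ours):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // forceUnmount runs `umount -f` on the node via `minikube ssh` and,
    // like the cleanup above, treats "not mounted" as already clean.
    func forceUnmount(profile, mountPoint string) error {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "ssh", "sudo umount -f "+mountPoint).CombinedOutput()
        if err != nil && strings.Contains(string(out), "not mounted") {
            fmt.Printf("%s already unmounted, nothing to do\n", mountPoint)
            return nil
        }
        return err
    }

    func main() {
        if err := forceUnmount("functional-452443", "/mount-9p"); err != nil {
            fmt.Println("unmount failed:", err)
        }
    }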

TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3816582917/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3816582917/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3816582917/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452443 ssh "findmnt -T" /mount1: exit status 1 (325.272748ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0929 10:58:05.905916  807725 retry.go:31] will retry after 699.132355ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-452443 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3816582917/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3816582917/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3816582917/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

TestFunctional/parallel/DockerEnv/bash (0.97s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-452443 docker-env) && out/minikube-linux-amd64 status -p functional-452443"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-452443 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.97s)
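
This test goes through `/bin/bash -c` because the `eval` must happen in a shell: `minikube docker-env` prints export statements (DOCKER_HOST and friends) that point the local docker CLI at the daemon inside the node. Reduced to a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The shell evaluates the exports, then `docker images` talks to
        // the docker daemon inside the minikube node.
        script := `eval $(out/minikube-linux-amd64 -p functional-452443 docker-env) && docker images`
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            panic(err)
        }
    }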

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452443 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-452443
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-452443
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452443 image ls --format short --alsologtostderr:
I0929 11:03:01.800837  882371 out.go:360] Setting OutFile to fd 1 ...
I0929 11:03:01.800983  882371 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:03:01.800997  882371 out.go:374] Setting ErrFile to fd 2...
I0929 11:03:01.801004  882371 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:03:01.801273  882371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
I0929 11:03:01.802000  882371 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:03:01.802105  882371 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:03:01.802551  882371 cli_runner.go:164] Run: docker container inspect functional-452443 --format={{.State.Status}}
I0929 11:03:01.823175  882371 ssh_runner.go:195] Run: systemctl --version
I0929 11:03:01.823228  882371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-452443
I0929 11:03:01.842481  882371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/functional-452443/id_rsa Username:docker}
I0929 11:03:01.936856  882371 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452443 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ localhost/my-image                          │ functional-452443 │ faabf9156dcf3 │ 1.24MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ a0af72f2ec6d6 │ 74.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ 46169d968e920 │ 52.8MB │
│ docker.io/kicbase/echo-server               │ functional-452443 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/minikube-local-cache-test │ functional-452443 │ 6ad040fac6744 │ 30B    │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ 90550c43ad2bc │ 88MB   │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ df0860106674d │ 71.9MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452443 image ls --format table --alsologtostderr:
I0929 11:03:05.551854  882878 out.go:360] Setting OutFile to fd 1 ...
I0929 11:03:05.552112  882878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:03:05.552123  882878 out.go:374] Setting ErrFile to fd 2...
I0929 11:03:05.552128  882878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:03:05.552341  882878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
I0929 11:03:05.552968  882878 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:03:05.553072  882878 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:03:05.553473  882878 cli_runner.go:164] Run: docker container inspect functional-452443 --format={{.State.Status}}
I0929 11:03:05.572547  882878 ssh_runner.go:195] Run: systemctl --version
I0929 11:03:05.572617  882878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-452443
I0929 11:03:05.591784  882878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/functional-452443/id_rsa Username:docker}
I0929 11:03:05.684605  882878 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452443 image ls --format json --alsologtostderr:
[{"id":"6ad040fac674416ef01ee6882c90cdd19f2a535c31d7951735fed09fd7133f87","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-452443"],"size":"30"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"88000000"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"71900000"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"74900000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-452443","docker.io/kicbase/echo-server:late
st"],"size":"4940000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"52800000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c
1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"faabf9156dcf3bb66e06beae73ed1c2edfd03c2d3287e7fcb534144c99a696b6","repoDigests":[],"repoTags":["localhost/my-image:functional-452443"],"size":"1240000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452443 image ls --format json --alsologtostderr:
I0929 11:03:05.337255  882829 out.go:360] Setting OutFile to fd 1 ...
I0929 11:03:05.337393  882829 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:03:05.337408  882829 out.go:374] Setting ErrFile to fd 2...
I0929 11:03:05.337431  882829 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:03:05.337649  882829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
I0929 11:03:05.338339  882829 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:03:05.338497  882829 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:03:05.339088  882829 cli_runner.go:164] Run: docker container inspect functional-452443 --format={{.State.Status}}
I0929 11:03:05.358226  882829 ssh_runner.go:195] Run: systemctl --version
I0929 11:03:05.358295  882829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-452443
I0929 11:03:05.377234  882829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/functional-452443/id_rsa Username:docker}
I0929 11:03:05.471780  882829 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
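
The JSON format above is the easiest to consume programmatically. A sketch that shells out to `image ls --format json` and decodes the fields visible in this run (the struct is ours, not minikube's; note that `size` is a JSON string):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // image mirrors the fields visible in the JSON output above.
    type image struct {
        ID       string   `json:"id"`
        RepoTags []string `json:"repoTags"`
        Size     string   `json:"size"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-452443",
            "image", "ls", "--format", "json").Output()
        if err != nil {
            panic(err)
        }
        var images []image
        if err := json.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            fmt.Printf("%-70s %s bytes\n", img.RepoTags, img.Size)
        }
    }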

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452443 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "88000000"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "71900000"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "74900000"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "52800000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 6ad040fac674416ef01ee6882c90cdd19f2a535c31d7951735fed09fd7133f87
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-452443
size: "30"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-452443
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452443 image ls --format yaml --alsologtostderr:
I0929 11:03:02.016572  882419 out.go:360] Setting OutFile to fd 1 ...
I0929 11:03:02.016694  882419 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:03:02.016706  882419 out.go:374] Setting ErrFile to fd 2...
I0929 11:03:02.016714  882419 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:03:02.016947  882419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
I0929 11:03:02.017758  882419 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:03:02.017872  882419 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:03:02.018281  882419 cli_runner.go:164] Run: docker container inspect functional-452443 --format={{.State.Status}}
I0929 11:03:02.037629  882419 ssh_runner.go:195] Run: systemctl --version
I0929 11:03:02.037688  882419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-452443
I0929 11:03:02.056306  882419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/functional-452443/id_rsa Username:docker}
I0929 11:03:02.150732  882419 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452443 ssh pgrep buildkitd: exit status 1 (263.894252ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image build -t localhost/my-image:functional-452443 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-452443 image build -t localhost/my-image:functional-452443 testdata/build --alsologtostderr: (2.63297992s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452443 image build -t localhost/my-image:functional-452443 testdata/build --alsologtostderr:
I0929 11:03:02.493226  882569 out.go:360] Setting OutFile to fd 1 ...
I0929 11:03:02.493539  882569 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:03:02.493551  882569 out.go:374] Setting ErrFile to fd 2...
I0929 11:03:02.493555  882569 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:03:02.493740  882569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
I0929 11:03:02.494374  882569 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:03:02.495223  882569 config.go:182] Loaded profile config "functional-452443": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:03:02.495704  882569 cli_runner.go:164] Run: docker container inspect functional-452443 --format={{.State.Status}}
I0929 11:03:02.514796  882569 ssh_runner.go:195] Run: systemctl --version
I0929 11:03:02.514857  882569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-452443
I0929 11:03:02.534946  882569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/functional-452443/id_rsa Username:docker}
I0929 11:03:02.628827  882569 build_images.go:161] Building image from path: /tmp/build.4083795063.tar
I0929 11:03:02.628910  882569 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 11:03:02.639775  882569 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4083795063.tar
I0929 11:03:02.643789  882569 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4083795063.tar: stat -c "%s %y" /var/lib/minikube/build/build.4083795063.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4083795063.tar': No such file or directory
I0929 11:03:02.643823  882569 ssh_runner.go:362] scp /tmp/build.4083795063.tar --> /var/lib/minikube/build/build.4083795063.tar (3072 bytes)
I0929 11:03:02.671701  882569 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4083795063
I0929 11:03:02.681637  882569 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4083795063 -xf /var/lib/minikube/build/build.4083795063.tar
I0929 11:03:02.692040  882569 docker.go:361] Building image: /var/lib/minikube/build/build.4083795063
I0929 11:03:02.692132  882569 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-452443 /var/lib/minikube/build/build.4083795063
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.1s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:faabf9156dcf3bb66e06beae73ed1c2edfd03c2d3287e7fcb534144c99a696b6 done
#8 naming to localhost/my-image:functional-452443 done
#8 DONE 0.0s
I0929 11:03:05.050711  882569 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-452443 /var/lib/minikube/build/build.4083795063: (2.358538345s)
I0929 11:03:05.050801  882569 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4083795063
I0929 11:03:05.061263  882569 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4083795063.tar
I0929 11:03:05.070942  882569 build_images.go:217] Built localhost/my-image:functional-452443 from /tmp/build.4083795063.tar
I0929 11:03:05.070976  882569 build_images.go:133] succeeded building to: functional-452443
I0929 11:03:05.070981  882569 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)
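
The build flow visible in the stderr log is: tar the local `testdata/build` context, probe the node with `stat` for an existing tarball, scp it over, extract, then run `docker build` on the node. The probe step, reduced to a sketch (function name is ours):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // exists reproduces the existence probe from the log: run stat on the
    // node and interpret a non-zero exit as "file absent", which tells
    // the caller it must copy the build tarball over before extracting.
    func exists(profile, path string) bool {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "ssh", fmt.Sprintf(`stat -c "%%s %%y" %s`, path))
        return cmd.Run() == nil
    }

    func main() {
        fmt.Println(exists("functional-452443", "/var/lib/minikube/build/build.4083795063.tar"))
    }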

TestFunctional/parallel/ImageCommands/Setup (1.72s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.696011893s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-452443
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.77s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.77s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-452443
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image load --daemon kicbase/echo-server:functional-452443 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image save kicbase/echo-server:functional-452443 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image rm kicbase/echo-server:functional-452443 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.56s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-452443
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 image save --daemon kicbase/echo-server:functional-452443 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-452443
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
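
This subtest verifies a full round trip: delete the tag from the host daemon, `image save --daemon` to export it back out of the cluster, then `docker image inspect` to prove it arrived. As a sketch (the helper is ours):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and panics with its combined output on failure.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
        }
    }

    func main() {
        run("docker", "rmi", "kicbase/echo-server:functional-452443")
        run("out/minikube-linux-amd64", "-p", "functional-452443",
            "image", "save", "--daemon", "kicbase/echo-server:functional-452443")
        run("docker", "image", "inspect", "kicbase/echo-server:functional-452443")
    }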

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.51s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-452443 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-452443 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
E0929 11:03:38.341963  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-452443
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-452443
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-452443
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (100.14s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0929 11:08:38.343162  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-744338 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m39.396462972s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (100.14s)

TestMultiControlPlane/serial/DeployApp (54.74s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-744338 kubectl -- rollout status deployment/busybox: (4.483260031s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0929 11:10:07.142268  807725 retry.go:31] will retry after 1.210688601s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0929 11:10:08.477121  807725 retry.go:31] will retry after 2.189309726s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0929 11:10:10.791878  807725 retry.go:31] will retry after 3.352818276s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0929 11:10:14.269550  807725 retry.go:31] will retry after 2.676664992s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0929 11:10:17.073646  807725 retry.go:31] will retry after 4.019337797s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0929 11:10:21.211074  807725 retry.go:31] will retry after 7.519620517s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0929 11:10:28.849248  807725 retry.go:31] will retry after 7.657323225s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0929 11:10:36.632363  807725 retry.go:31] will retry after 18.669960578s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-rdpfp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-rjk88 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-snbtx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-rdpfp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-rjk88 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-snbtx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-rdpfp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-rjk88 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-snbtx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (54.74s)
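
The retry.go lines above show minikube's poll-until-deadline helper at work: each failed check sleeps for a roughly doubling, jittered interval before re-running the kubectl jsonpath query (compare the logged delays: ~3.3s, 2.6s, 4.0s, 7.5s, 7.6s, 18.6s). A minimal Go sketch of that pattern; helper names here are illustrative, not minikube's actual API:

package main

import (
	"fmt"
	"math/rand"
	"strings"
	"time"
)

// retry re-runs check until it succeeds or maxWait elapses, sleeping a
// doubling, jittered interval between attempts.
func retry(check func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 2 * time.Second
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	podIPs := "10.244.0.4 10.244.1.2" // stand-in for the jsonpath output above
	_ = retry(func() error {
		if got := len(strings.Fields(podIPs)); got != 3 {
			return fmt.Errorf("expected 3 Pod IPs but got %d (may be temporary)", got)
		}
		return nil
	}, 10*time.Second)
}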

TestMultiControlPlane/serial/PingHostFromPods (1.11s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-rdpfp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-rdpfp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-rjk88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-rjk88 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-snbtx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 kubectl -- exec busybox-7b57f96db7-snbtx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.11s)
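
The steps above resolve host.minikube.internal inside each busybox pod (busybox's nslookup prints the resolved address on line 5, hence the awk/cut pipeline), then ping the returned gateway address once. A sketch of a single iteration, assuming kubectl on PATH and the pod name from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-rdpfp" // pod name taken from the log above
	// busybox nslookup prints the answer on line 5; awk/cut extract the address
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out)) // 192.168.49.1 in this run
	// one ICMP echo from inside the pod proves the host gateway is reachable
	ping := fmt.Sprintf("ping -c 1 %s", hostIP)
	if err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", ping).Run(); err != nil {
		panic(fmt.Sprintf("host %s not reachable from %s: %v", hostIP, pod, err))
	}
	fmt.Printf("host %s reachable from %s\n", hostIP, pod)
}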

TestMultiControlPlane/serial/AddWorkerNode (14.29s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-744338 node add --alsologtostderr -v 5: (13.361331291s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (14.29s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-744338 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (16.74s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp testdata/cp-test.txt ha-744338:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile894932007/001/cp-test_ha-744338.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338:/home/docker/cp-test.txt ha-744338-m02:/home/docker/cp-test_ha-744338_ha-744338-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m02 "sudo cat /home/docker/cp-test_ha-744338_ha-744338-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338:/home/docker/cp-test.txt ha-744338-m03:/home/docker/cp-test_ha-744338_ha-744338-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m03 "sudo cat /home/docker/cp-test_ha-744338_ha-744338-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338:/home/docker/cp-test.txt ha-744338-m04:/home/docker/cp-test_ha-744338_ha-744338-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m04 "sudo cat /home/docker/cp-test_ha-744338_ha-744338-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp testdata/cp-test.txt ha-744338-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile894932007/001/cp-test_ha-744338-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m02:/home/docker/cp-test.txt ha-744338:/home/docker/cp-test_ha-744338-m02_ha-744338.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338 "sudo cat /home/docker/cp-test_ha-744338-m02_ha-744338.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m02:/home/docker/cp-test.txt ha-744338-m03:/home/docker/cp-test_ha-744338-m02_ha-744338-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m03 "sudo cat /home/docker/cp-test_ha-744338-m02_ha-744338-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m02:/home/docker/cp-test.txt ha-744338-m04:/home/docker/cp-test_ha-744338-m02_ha-744338-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m04 "sudo cat /home/docker/cp-test_ha-744338-m02_ha-744338-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp testdata/cp-test.txt ha-744338-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile894932007/001/cp-test_ha-744338-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m03:/home/docker/cp-test.txt ha-744338:/home/docker/cp-test_ha-744338-m03_ha-744338.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338 "sudo cat /home/docker/cp-test_ha-744338-m03_ha-744338.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m03:/home/docker/cp-test.txt ha-744338-m02:/home/docker/cp-test_ha-744338-m03_ha-744338-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m02 "sudo cat /home/docker/cp-test_ha-744338-m03_ha-744338-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m03:/home/docker/cp-test.txt ha-744338-m04:/home/docker/cp-test_ha-744338-m03_ha-744338-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m04 "sudo cat /home/docker/cp-test_ha-744338-m03_ha-744338-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp testdata/cp-test.txt ha-744338-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile894932007/001/cp-test_ha-744338-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m04:/home/docker/cp-test.txt ha-744338:/home/docker/cp-test_ha-744338-m04_ha-744338.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338 "sudo cat /home/docker/cp-test_ha-744338-m04_ha-744338.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m04:/home/docker/cp-test.txt ha-744338-m02:/home/docker/cp-test_ha-744338-m04_ha-744338-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m02 "sudo cat /home/docker/cp-test_ha-744338-m04_ha-744338-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 cp ha-744338-m04:/home/docker/cp-test.txt ha-744338-m03:/home/docker/cp-test_ha-744338-m04_ha-744338-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 ssh -n ha-744338-m03 "sudo cat /home/docker/cp-test_ha-744338-m04_ha-744338-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.74s)
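
CopyFile exercises the cp subcommand as a full matrix: push testdata/cp-test.txt to each node, verify it, pull it back to the host, copy it to every other node, and cat the file over ssh after each hop. The nested loop below reproduces that pattern (a sketch; the host-side pull path is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary under test with the log's profile.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "ha-744338"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
}

func main() {
	nodes := []string{"ha-744338", "ha-744338-m02", "ha-744338-m03", "ha-744338-m04"}
	for _, src := range nodes {
		// push the fixture to src, verify it landed, pull it back to the host
		run("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run("ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
		run("cp", src+":/home/docker/cp-test.txt", "/tmp/cp-test_"+src+".txt") // hypothetical host path
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			// node-to-node copy, then verify on both ends
			remote := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run("cp", src+":/home/docker/cp-test.txt", dst+":"+remote)
			run("ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
			run("ssh", "-n", dst, "sudo cat "+remote)
		}
	}
}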

TestMultiControlPlane/serial/StopSecondaryNode (11.42s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-744338 node stop m02 --alsologtostderr -v 5: (10.739526051s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-744338 status --alsologtostderr -v 5: exit status 7 (677.46488ms)
-- stdout --
	ha-744338
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-744338-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-744338-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-744338-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0929 11:11:40.968089  913887 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:11:40.968382  913887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:11:40.968393  913887 out.go:374] Setting ErrFile to fd 2...
	I0929 11:11:40.968398  913887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:11:40.968619  913887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 11:11:40.968801  913887 out.go:368] Setting JSON to false
	I0929 11:11:40.968831  913887 mustload.go:65] Loading cluster: ha-744338
	I0929 11:11:40.968895  913887 notify.go:220] Checking for updates...
	I0929 11:11:40.969335  913887 config.go:182] Loaded profile config "ha-744338": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:11:40.969365  913887 status.go:174] checking status of ha-744338 ...
	I0929 11:11:40.969869  913887 cli_runner.go:164] Run: docker container inspect ha-744338 --format={{.State.Status}}
	I0929 11:11:40.989369  913887 status.go:371] ha-744338 host status = "Running" (err=<nil>)
	I0929 11:11:40.989393  913887 host.go:66] Checking if "ha-744338" exists ...
	I0929 11:11:40.989761  913887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-744338
	I0929 11:11:41.008129  913887 host.go:66] Checking if "ha-744338" exists ...
	I0929 11:11:41.008537  913887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:11:41.008616  913887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-744338
	I0929 11:11:41.029574  913887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/ha-744338/id_rsa Username:docker}
	I0929 11:11:41.122730  913887 ssh_runner.go:195] Run: systemctl --version
	I0929 11:11:41.127200  913887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:11:41.138676  913887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:11:41.193224  913887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 11:11:41.183279598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:11:41.193829  913887 kubeconfig.go:125] found "ha-744338" server: "https://192.168.49.254:8443"
	I0929 11:11:41.193871  913887 api_server.go:166] Checking apiserver status ...
	I0929 11:11:41.193911  913887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:11:41.206306  913887 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0929 11:11:41.216283  913887 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:11:41.216343  913887 ssh_runner.go:195] Run: ls
	I0929 11:11:41.219914  913887 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 11:11:41.224891  913887 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 11:11:41.224913  913887 status.go:463] ha-744338 apiserver status = Running (err=<nil>)
	I0929 11:11:41.224927  913887 status.go:176] ha-744338 status: &{Name:ha-744338 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:11:41.224945  913887 status.go:174] checking status of ha-744338-m02 ...
	I0929 11:11:41.225170  913887 cli_runner.go:164] Run: docker container inspect ha-744338-m02 --format={{.State.Status}}
	I0929 11:11:41.246004  913887 status.go:371] ha-744338-m02 host status = "Stopped" (err=<nil>)
	I0929 11:11:41.246025  913887 status.go:384] host is not running, skipping remaining checks
	I0929 11:11:41.246032  913887 status.go:176] ha-744338-m02 status: &{Name:ha-744338-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:11:41.246063  913887 status.go:174] checking status of ha-744338-m03 ...
	I0929 11:11:41.246379  913887 cli_runner.go:164] Run: docker container inspect ha-744338-m03 --format={{.State.Status}}
	I0929 11:11:41.264641  913887 status.go:371] ha-744338-m03 host status = "Running" (err=<nil>)
	I0929 11:11:41.264669  913887 host.go:66] Checking if "ha-744338-m03" exists ...
	I0929 11:11:41.264928  913887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-744338-m03
	I0929 11:11:41.283236  913887 host.go:66] Checking if "ha-744338-m03" exists ...
	I0929 11:11:41.283604  913887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:11:41.283652  913887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-744338-m03
	I0929 11:11:41.301644  913887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33544 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/ha-744338-m03/id_rsa Username:docker}
	I0929 11:11:41.395784  913887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:11:41.407802  913887 kubeconfig.go:125] found "ha-744338" server: "https://192.168.49.254:8443"
	I0929 11:11:41.407831  913887 api_server.go:166] Checking apiserver status ...
	I0929 11:11:41.407876  913887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:11:41.419587  913887 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2223/cgroup
	W0929 11:11:41.429210  913887 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2223/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:11:41.429269  913887 ssh_runner.go:195] Run: ls
	I0929 11:11:41.432846  913887 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 11:11:41.437006  913887 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 11:11:41.437027  913887 status.go:463] ha-744338-m03 apiserver status = Running (err=<nil>)
	I0929 11:11:41.437037  913887 status.go:176] ha-744338-m03 status: &{Name:ha-744338-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:11:41.437055  913887 status.go:174] checking status of ha-744338-m04 ...
	I0929 11:11:41.437313  913887 cli_runner.go:164] Run: docker container inspect ha-744338-m04 --format={{.State.Status}}
	I0929 11:11:41.456067  913887 status.go:371] ha-744338-m04 host status = "Running" (err=<nil>)
	I0929 11:11:41.456090  913887 host.go:66] Checking if "ha-744338-m04" exists ...
	I0929 11:11:41.456336  913887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-744338-m04
	I0929 11:11:41.473329  913887 host.go:66] Checking if "ha-744338-m04" exists ...
	I0929 11:11:41.473608  913887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:11:41.473659  913887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-744338-m04
	I0929 11:11:41.491859  913887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/ha-744338-m04/id_rsa Username:docker}
	I0929 11:11:41.584497  913887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:11:41.596307  913887 status.go:176] ha-744338-m04 status: &{Name:ha-744338-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.42s)
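
The stderr trace above is the per-node status algorithm: docker container inspect for the host state, an ssh probe of the kubelet unit, and, for control-plane nodes, a healthz request against the HA virtual IP (192.168.49.254:8443). Compressed into a sketch (not minikube's real helpers; the real code also marks worker apiservers Irrelevant):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// hostState returns the docker container state for a node, e.g.
// "running" or "exited" (cf. the cli_runner.go lines above).
func hostState(node string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", node,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// apiserverHealthy mirrors the healthz probe against the HA virtual IP.
func apiserverHealthy() bool {
	client := &http.Client{
		Timeout: 3 * time.Second,
		// the test cluster uses a self-signed CA, so skip verification here
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for _, node := range []string{"ha-744338", "ha-744338-m02", "ha-744338-m03", "ha-744338-m04"} {
		state, err := hostState(node)
		if err != nil || state != "running" {
			fmt.Printf("%s: host Stopped, skipping remaining checks\n", node)
			continue
		}
		fmt.Printf("%s: host Running, apiserver healthy=%v\n", node, apiserverHealthy())
	}
}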

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (37.37s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-744338 node start m02 --alsologtostderr -v 5: (36.368432376s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.37s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (163.68s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 stop --alsologtostderr -v 5
E0929 11:12:44.075216  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:12:44.081606  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:12:44.092915  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:12:44.114187  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:12:44.155515  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:12:44.236934  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:12:44.398164  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:12:44.720230  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:12:45.362326  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:12:46.643897  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:12:49.205518  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-744338 stop --alsologtostderr -v 5: (33.357683751s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 start --wait true --alsologtostderr -v 5
E0929 11:12:54.326852  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:13:04.568247  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:13:25.049811  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:13:38.341740  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:06.012922  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:15:01.405061  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-744338 start --wait true --alsologtostderr -v 5: (2m10.20444952s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (163.68s)
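
RestartClusterKeepsNodes captures the node list, stops everything, restarts with --wait true, and asserts the list is unchanged. Schematically (a sketch of the test's shape, not its code):

package main

import (
	"fmt"
	"os/exec"
)

func minikube(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "ha-744338"}, args...)...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	before := minikube("node", "list")
	minikube("stop")
	minikube("start", "--wait", "true")
	if after := minikube("node", "list"); after != before {
		panic(fmt.Sprintf("node list changed across restart:\nbefore:\n%safter:\n%s", before, after))
	}
	fmt.Println("node list unchanged across restart")
}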

TestMultiControlPlane/serial/DeleteSecondaryNode (9.13s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-744338 node delete m03 --alsologtostderr -v 5: (8.359533046s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.13s)
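
The go-template passed to kubectl above prints the status of each node's Ready condition, one per line. It is plain text/template syntax, so the same check runs locally against kubectl's JSON output; a sketch using the template verbatim from the log:

package main

import (
	"encoding/json"
	"os"
	"os/exec"
	"text/template"
)

// The template kubectl evaluates in the step above: for each node,
// print the status of its "Ready" condition.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	raw, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nodes map[string]interface{}
	if err := json.Unmarshal(raw, &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}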

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

TestMultiControlPlane/serial/StopCluster (32.22s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 stop --alsologtostderr -v 5
E0929 11:15:27.934288  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-744338 stop --alsologtostderr -v 5: (32.111685422s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-744338 status --alsologtostderr -v 5: exit status 7 (106.570945ms)
-- stdout --
	ha-744338
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-744338-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-744338-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0929 11:15:46.184976  944302 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:15:46.185085  944302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:15:46.185090  944302 out.go:374] Setting ErrFile to fd 2...
	I0929 11:15:46.185094  944302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:15:46.185268  944302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 11:15:46.185445  944302 out.go:368] Setting JSON to false
	I0929 11:15:46.185476  944302 mustload.go:65] Loading cluster: ha-744338
	I0929 11:15:46.185580  944302 notify.go:220] Checking for updates...
	I0929 11:15:46.185803  944302 config.go:182] Loaded profile config "ha-744338": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:15:46.185823  944302 status.go:174] checking status of ha-744338 ...
	I0929 11:15:46.186941  944302 cli_runner.go:164] Run: docker container inspect ha-744338 --format={{.State.Status}}
	I0929 11:15:46.206214  944302 status.go:371] ha-744338 host status = "Stopped" (err=<nil>)
	I0929 11:15:46.206255  944302 status.go:384] host is not running, skipping remaining checks
	I0929 11:15:46.206264  944302 status.go:176] ha-744338 status: &{Name:ha-744338 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:15:46.206301  944302 status.go:174] checking status of ha-744338-m02 ...
	I0929 11:15:46.206673  944302 cli_runner.go:164] Run: docker container inspect ha-744338-m02 --format={{.State.Status}}
	I0929 11:15:46.224100  944302 status.go:371] ha-744338-m02 host status = "Stopped" (err=<nil>)
	I0929 11:15:46.224120  944302 status.go:384] host is not running, skipping remaining checks
	I0929 11:15:46.224128  944302 status.go:176] ha-744338-m02 status: &{Name:ha-744338-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:15:46.224149  944302 status.go:174] checking status of ha-744338-m04 ...
	I0929 11:15:46.224368  944302 cli_runner.go:164] Run: docker container inspect ha-744338-m04 --format={{.State.Status}}
	I0929 11:15:46.242005  944302 status.go:371] ha-744338-m04 host status = "Stopped" (err=<nil>)
	I0929 11:15:46.242025  944302 status.go:384] host is not running, skipping remaining checks
	I0929 11:15:46.242033  944302 status.go:176] ha-744338-m04 status: &{Name:ha-744338-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.22s)
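
Both here and in StopSecondaryNode the status command exits 7. minikube composes that exit code from per-component bit flags accumulated across nodes, which is why one stopped node and a fully stopped cluster report the same value. A schematic reconstruction (flag names are illustrative; see minikube's status command for the real definitions):

package main

import "fmt"

// Illustrative bit flags; exit code 7 = 1|2|4, i.e. host, kubelet and
// apiserver are all down on at least one node.
const (
	hostNotRunning      = 1 << 0
	kubeletNotRunning   = 1 << 1
	apiserverNotRunning = 1 << 2
)

type nodeStatus struct{ host, kubelet, apiserver string }

func exitCode(nodes []nodeStatus) int {
	code := 0
	for _, n := range nodes {
		if n.host != "Running" {
			code |= hostNotRunning
		}
		if n.kubelet != "Running" {
			code |= kubeletNotRunning
		}
		if n.apiserver != "Running" && n.apiserver != "Irrelevant" {
			code |= apiserverNotRunning
		}
	}
	return code
}

func main() {
	stopped := nodeStatus{"Stopped", "Stopped", "Stopped"}
	running := nodeStatus{"Running", "Running", "Running"}
	fmt.Println(exitCode([]nodeStatus{running, stopped, running})) // 7, as in StopSecondaryNode
	fmt.Println(exitCode([]nodeStatus{stopped, stopped, stopped})) // 7, as in StopCluster
	fmt.Println(exitCode([]nodeStatus{running, running}))          // 0
}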

TestMultiControlPlane/serial/RestartCluster (101.69s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-744338 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m40.89622503s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (101.69s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (25.82s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 node add --control-plane --alsologtostderr -v 5
E0929 11:17:44.077633  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-744338 node add --control-plane --alsologtostderr -v 5: (24.873100172s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-744338 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (25.82s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

TestImageBuild/serial/Setup (24.16s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-126726 --driver=docker  --container-runtime=docker
E0929 11:18:11.777589  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-126726 --driver=docker  --container-runtime=docker: (24.158311297s)
--- PASS: TestImageBuild/serial/Setup (24.16s)

TestImageBuild/serial/NormalBuild (1.16s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-126726
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-126726: (1.159796825s)
--- PASS: TestImageBuild/serial/NormalBuild (1.16s)

TestImageBuild/serial/BuildWithBuildArg (0.71s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-126726
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.71s)

TestImageBuild/serial/BuildWithDockerIgnore (0.51s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-126726
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.51s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.53s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-126726
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.53s)

TestJSONOutput/start/Command (66.13s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-895511 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E0929 11:18:38.343647  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-895511 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m6.125547828s)
--- PASS: TestJSONOutput/start/Command (66.13s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.49s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-895511 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-895511 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-895511 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-895511 --output=json --user=testUser: (10.798270464s)
--- PASS: TestJSONOutput/stop/Command (10.80s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-700337 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-700337 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.9656ms)
-- stdout --
	{"specversion":"1.0","id":"33c039eb-dcb7-4d4f-9baa-01afec24885c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-700337] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8d50ac9-40f9-4d4f-be42-44465df946ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21656"}}
	{"specversion":"1.0","id":"a71c97af-af91-4e74-9b1b-9e0bb9e98d8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5dc33eab-f1ab-43c2-a9c8-b028bfb23815","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig"}}
	{"specversion":"1.0","id":"d657a0a3-a69f-4d1f-ac00-5162474fb6a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube"}}
	{"specversion":"1.0","id":"eb5a9c86-cd35-43ad-b5a9-e0551205eada","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e8bbae46-8cfb-4b8c-a65f-845dec647d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2a456319-4494-4481-bd4f-f71d201fda11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-700337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-700337
--- PASS: TestErrorJSONOutput (0.23s)
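
Every --output=json line above is a CloudEvents envelope whose data payload carries the step or error details; the error event at the end is what drives exit status 56. Decoding them takes only a small struct (field names taken from the records above):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event matches the CloudEvents envelopes in the stdout block above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// one line of minikube output, copied from the log
	line := `{"specversion":"1.0","id":"2a456319-4494-4481-bd4f-f71d201fda11",` +
		`"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",` +
		`"datacontenttype":"application/json","data":{"exitcode":"56",` +
		`"message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	sc := bufio.NewScanner(strings.NewReader(line))
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s: %s (exit %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
		}
	}
}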

TestKicCustomNetwork/create_custom_network (25.25s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-886593 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-886593 --network=: (23.090451939s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-886593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-886593
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-886593: (2.142650729s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.25s)
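
The test passes --network= (empty), lets minikube create a network, and then scans docker network ls for the expected name. A minimal version of that verification, assuming the created network is named after the profile:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		panic(err)
	}
	want := "docker-network-886593" // profile name from the log above
	for _, name := range strings.Fields(string(out)) {
		if name == want {
			fmt.Println("custom network exists:", name)
			return
		}
	}
	panic("expected network " + want + " to be listed")
}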

TestKicCustomNetwork/use_default_bridge_network (23.98s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-519325 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-519325 --network=bridge: (22.009626327s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-519325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-519325
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-519325: (1.950506313s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.98s)

TestKicExistingNetwork (24.67s)

=== RUN   TestKicExistingNetwork
I0929 11:20:42.064207  807725 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0929 11:20:42.082112  807725 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0929 11:20:42.082203  807725 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0929 11:20:42.082222  807725 cli_runner.go:164] Run: docker network inspect existing-network
W0929 11:20:42.099888  807725 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0929 11:20:42.099928  807725 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0929 11:20:42.099947  807725 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0929 11:20:42.100130  807725 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0929 11:20:42.118154  807725 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4e6c4ce3f3c8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:08:4c:37:84:13} reservation:<nil>}
I0929 11:20:42.118687  807725 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00167ae00}
I0929 11:20:42.118724  807725 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0929 11:20:42.118777  807725 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0929 11:20:42.174046  807725 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-070527 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-070527 --network=existing-network: (22.565573194s)
helpers_test.go:175: Cleaning up "existing-network-070527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-070527
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-070527: (1.953456893s)
I0929 11:21:06.711320  807725 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.67s)

TestKicCustomSubnet (25.28s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-398215 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-398215 --subnet=192.168.60.0/24: (23.091240326s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-398215 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-398215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-398215
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-398215: (2.168123559s)
--- PASS: TestKicCustomSubnet (25.28s)

TestKicStaticIP (25.51s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-640901 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-640901 --static-ip=192.168.200.200: (23.267499152s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-640901 ip
helpers_test.go:175: Cleaning up "static-ip-640901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-640901
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-640901: (2.100774288s)
--- PASS: TestKicStaticIP (25.51s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (51.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-551176 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-551176 --driver=docker  --container-runtime=docker: (22.61354915s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-564248 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-564248 --driver=docker  --container-runtime=docker: (23.049289078s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-551176
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-564248
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
E0929 11:22:44.075669  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:175: Cleaning up "second-564248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-564248
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-564248: (2.160547738s)
helpers_test.go:175: Cleaning up "first-551176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-551176
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-551176: (2.180562932s)
--- PASS: TestMinikubeProfile (51.24s)

TestMountStart/serial/StartWithMountFirst (8.23s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-505618 --memory=3072 --mount-string /tmp/TestMountStartserial4210176166/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-505618 --memory=3072 --mount-string /tmp/TestMountStartserial4210176166/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.226888521s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.23s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-505618 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (8.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-524841 --memory=3072 --mount-string /tmp/TestMountStartserial4210176166/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-524841 --memory=3072 --mount-string /tmp/TestMountStartserial4210176166/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.72837785s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.73s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-524841 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.53s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-505618 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-505618 --alsologtostderr -v=5: (1.534125917s)
--- PASS: TestMountStart/serial/DeleteFirst (1.53s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-524841 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-524841
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-524841: (1.191883489s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (9.4s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-524841
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-524841: (8.397575148s)
--- PASS: TestMountStart/serial/RestartStopped (9.40s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-524841 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (57.25s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-495083 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0929 11:23:38.342660  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-495083 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (56.758864739s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (57.25s)

TestMultiNode/serial/DeployApp2Nodes (44.26s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-495083 -- rollout status deployment/busybox: (3.251546211s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:24:21.403651  807725 retry.go:31] will retry after 540.056503ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:24:22.065220  807725 retry.go:31] will retry after 2.026665578s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:24:24.212576  807725 retry.go:31] will retry after 3.175961479s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:24:27.512298  807725 retry.go:31] will retry after 3.599712273s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:24:31.238291  807725 retry.go:31] will retry after 3.079445563s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:24:34.441176  807725 retry.go:31] will retry after 4.579053538s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:24:39.145056  807725 retry.go:31] will retry after 7.721780285s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:24:46.988578  807725 retry.go:31] will retry after 13.714961813s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- exec busybox-7b57f96db7-2drxg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- exec busybox-7b57f96db7-6djjn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- exec busybox-7b57f96db7-2drxg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- exec busybox-7b57f96db7-6djjn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- exec busybox-7b57f96db7-2drxg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- exec busybox-7b57f96db7-6djjn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (44.26s)

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- exec busybox-7b57f96db7-2drxg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- exec busybox-7b57f96db7-2drxg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- exec busybox-7b57f96db7-6djjn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-495083 -- exec busybox-7b57f96db7-6djjn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

TestMultiNode/serial/AddNode (14.03s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-495083 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-495083 -v=5 --alsologtostderr: (13.374117709s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (14.03s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-495083 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.2s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp testdata/cp-test.txt multinode-495083:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp multinode-495083:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1641694992/001/cp-test_multinode-495083.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp multinode-495083:/home/docker/cp-test.txt multinode-495083-m02:/home/docker/cp-test_multinode-495083_multinode-495083-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m02 "sudo cat /home/docker/cp-test_multinode-495083_multinode-495083-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp multinode-495083:/home/docker/cp-test.txt multinode-495083-m03:/home/docker/cp-test_multinode-495083_multinode-495083-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m03 "sudo cat /home/docker/cp-test_multinode-495083_multinode-495083-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp testdata/cp-test.txt multinode-495083-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp multinode-495083-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1641694992/001/cp-test_multinode-495083-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp multinode-495083-m02:/home/docker/cp-test.txt multinode-495083:/home/docker/cp-test_multinode-495083-m02_multinode-495083.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083 "sudo cat /home/docker/cp-test_multinode-495083-m02_multinode-495083.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp multinode-495083-m02:/home/docker/cp-test.txt multinode-495083-m03:/home/docker/cp-test_multinode-495083-m02_multinode-495083-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m03 "sudo cat /home/docker/cp-test_multinode-495083-m02_multinode-495083-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp testdata/cp-test.txt multinode-495083-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp multinode-495083-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1641694992/001/cp-test_multinode-495083-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp multinode-495083-m03:/home/docker/cp-test.txt multinode-495083:/home/docker/cp-test_multinode-495083-m03_multinode-495083.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083 "sudo cat /home/docker/cp-test_multinode-495083-m03_multinode-495083.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 cp multinode-495083-m03:/home/docker/cp-test.txt multinode-495083-m02:/home/docker/cp-test_multinode-495083-m03_multinode-495083-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 ssh -n multinode-495083-m02 "sudo cat /home/docker/cp-test_multinode-495083-m03_multinode-495083-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.20s)

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-495083 node stop m03: (1.221516417s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-495083 status: exit status 7 (505.081837ms)

-- stdout --
	multinode-495083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-495083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-495083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-495083 status --alsologtostderr: exit status 7 (503.709609ms)

-- stdout --
	multinode-495083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-495083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-495083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0929 11:25:29.729637 1028511 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:25:29.730077 1028511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:25:29.730086 1028511 out.go:374] Setting ErrFile to fd 2...
	I0929 11:25:29.730090 1028511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:25:29.730329 1028511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 11:25:29.730551 1028511 out.go:368] Setting JSON to false
	I0929 11:25:29.730584 1028511 mustload.go:65] Loading cluster: multinode-495083
	I0929 11:25:29.730649 1028511 notify.go:220] Checking for updates...
	I0929 11:25:29.731021 1028511 config.go:182] Loaded profile config "multinode-495083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:25:29.731047 1028511 status.go:174] checking status of multinode-495083 ...
	I0929 11:25:29.731750 1028511 cli_runner.go:164] Run: docker container inspect multinode-495083 --format={{.State.Status}}
	I0929 11:25:29.753033 1028511 status.go:371] multinode-495083 host status = "Running" (err=<nil>)
	I0929 11:25:29.753087 1028511 host.go:66] Checking if "multinode-495083" exists ...
	I0929 11:25:29.753509 1028511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-495083
	I0929 11:25:29.773116 1028511 host.go:66] Checking if "multinode-495083" exists ...
	I0929 11:25:29.773410 1028511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:25:29.773474 1028511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-495083
	I0929 11:25:29.793279 1028511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33661 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/multinode-495083/id_rsa Username:docker}
	I0929 11:25:29.887510 1028511 ssh_runner.go:195] Run: systemctl --version
	I0929 11:25:29.892250 1028511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:25:29.905023 1028511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:25:29.961608 1028511 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 11:25:29.950869723 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:25:29.962207 1028511 kubeconfig.go:125] found "multinode-495083" server: "https://192.168.67.2:8443"
	I0929 11:25:29.962248 1028511 api_server.go:166] Checking apiserver status ...
	I0929 11:25:29.962300 1028511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:25:29.975800 1028511 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2206/cgroup
	W0929 11:25:29.986480 1028511 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2206/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:25:29.986542 1028511 ssh_runner.go:195] Run: ls
	I0929 11:25:29.990611 1028511 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0929 11:25:29.995670 1028511 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0929 11:25:29.995699 1028511 status.go:463] multinode-495083 apiserver status = Running (err=<nil>)
	I0929 11:25:29.995710 1028511 status.go:176] multinode-495083 status: &{Name:multinode-495083 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:25:29.995726 1028511 status.go:174] checking status of multinode-495083-m02 ...
	I0929 11:25:29.996038 1028511 cli_runner.go:164] Run: docker container inspect multinode-495083-m02 --format={{.State.Status}}
	I0929 11:25:30.015630 1028511 status.go:371] multinode-495083-m02 host status = "Running" (err=<nil>)
	I0929 11:25:30.015658 1028511 host.go:66] Checking if "multinode-495083-m02" exists ...
	I0929 11:25:30.015966 1028511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-495083-m02
	I0929 11:25:30.034858 1028511 host.go:66] Checking if "multinode-495083-m02" exists ...
	I0929 11:25:30.035237 1028511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:25:30.035309 1028511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-495083-m02
	I0929 11:25:30.054413 1028511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33666 SSHKeyPath:/home/jenkins/minikube-integration/21656-804126/.minikube/machines/multinode-495083-m02/id_rsa Username:docker}
	I0929 11:25:30.148218 1028511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:25:30.160934 1028511 status.go:176] multinode-495083-m02 status: &{Name:multinode-495083-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:25:30.160977 1028511 status.go:174] checking status of multinode-495083-m03 ...
	I0929 11:25:30.161256 1028511 cli_runner.go:164] Run: docker container inspect multinode-495083-m03 --format={{.State.Status}}
	I0929 11:25:30.180055 1028511 status.go:371] multinode-495083-m03 host status = "Stopped" (err=<nil>)
	I0929 11:25:30.180077 1028511 status.go:384] host is not running, skipping remaining checks
	I0929 11:25:30.180083 1028511 status.go:176] multinode-495083-m03 status: &{Name:multinode-495083-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)

TestMultiNode/serial/StartAfterStop (8.83s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-495083 node start m03 -v=5 --alsologtostderr: (8.144874687s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.83s)

TestMultiNode/serial/RestartKeepsNodes (70.1s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-495083
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-495083
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-495083: (22.657161459s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-495083 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-495083 --wait=true -v=5 --alsologtostderr: (47.337123719s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-495083
--- PASS: TestMultiNode/serial/RestartKeepsNodes (70.10s)

TestMultiNode/serial/DeleteNode (5.3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-495083 node delete m03: (4.692018929s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

TestMultiNode/serial/StopMultiNode (21.63s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-495083 stop: (21.438217936s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-495083 status: exit status 7 (96.787804ms)

-- stdout --
	multinode-495083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-495083-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-495083 status --alsologtostderr: exit status 7 (91.979839ms)

-- stdout --
	multinode-495083
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-495083-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0929 11:27:15.994289 1042808 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:27:15.994572 1042808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:27:15.994581 1042808 out.go:374] Setting ErrFile to fd 2...
	I0929 11:27:15.994585 1042808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:27:15.994804 1042808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-804126/.minikube/bin
	I0929 11:27:15.994991 1042808 out.go:368] Setting JSON to false
	I0929 11:27:15.995024 1042808 mustload.go:65] Loading cluster: multinode-495083
	I0929 11:27:15.995067 1042808 notify.go:220] Checking for updates...
	I0929 11:27:15.995408 1042808 config.go:182] Loaded profile config "multinode-495083": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:27:15.995440 1042808 status.go:174] checking status of multinode-495083 ...
	I0929 11:27:15.995862 1042808 cli_runner.go:164] Run: docker container inspect multinode-495083 --format={{.State.Status}}
	I0929 11:27:16.016045 1042808 status.go:371] multinode-495083 host status = "Stopped" (err=<nil>)
	I0929 11:27:16.016080 1042808 status.go:384] host is not running, skipping remaining checks
	I0929 11:27:16.016087 1042808 status.go:176] multinode-495083 status: &{Name:multinode-495083 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:27:16.016132 1042808 status.go:174] checking status of multinode-495083-m02 ...
	I0929 11:27:16.016374 1042808 cli_runner.go:164] Run: docker container inspect multinode-495083-m02 --format={{.State.Status}}
	I0929 11:27:16.034637 1042808 status.go:371] multinode-495083-m02 host status = "Stopped" (err=<nil>)
	I0929 11:27:16.034663 1042808 status.go:384] host is not running, skipping remaining checks
	I0929 11:27:16.034670 1042808 status.go:176] multinode-495083-m02 status: &{Name:multinode-495083-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.63s)

TestMultiNode/serial/RestartMultiNode (46.41s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-495083 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0929 11:27:44.076643  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-495083 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (45.793824954s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-495083 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.41s)

TestMultiNode/serial/ValidateNameConflict (26.37s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-495083
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-495083-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-495083-m02 --driver=docker  --container-runtime=docker: exit status 14 (73.84874ms)

-- stdout --
	* [multinode-495083-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	! Profile name 'multinode-495083-m02' is duplicated with machine name 'multinode-495083-m02' in profile 'multinode-495083'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-495083-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-495083-m03 --driver=docker  --container-runtime=docker: (23.803947882s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-495083
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-495083: exit status 80 (299.086337ms)

-- stdout --
	* Adding node m03 to cluster multinode-495083 as [worker]

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-495083-m03 already exists in multinode-495083-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-495083-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-495083-m03: (2.137355985s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.37s)

TestPreload (141.92s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-053936 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
E0929 11:28:38.341573  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:29:07.139583  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-053936 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (1m12.557842444s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-053936 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-053936 image pull gcr.io/k8s-minikube/busybox: (2.073919861s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-053936
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-053936: (5.70058793s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-053936 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-053936 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (59.13725518s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-053936 image list
helpers_test.go:175: Cleaning up "test-preload-053936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-053936
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-053936: (2.228921587s)
--- PASS: TestPreload (141.92s)

TestScheduledStopUnix (95.03s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-472899 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-472899 --memory=3072 --driver=docker  --container-runtime=docker: (21.881382991s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-472899 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-472899 -n scheduled-stop-472899
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-472899 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 11:31:17.056983  807725 retry.go:31] will retry after 73.815µs: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.058154  807725 retry.go:31] will retry after 177.56µs: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.059301  807725 retry.go:31] will retry after 253.857µs: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.060444  807725 retry.go:31] will retry after 248.168µs: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.061570  807725 retry.go:31] will retry after 609.717µs: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.062698  807725 retry.go:31] will retry after 672.827µs: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.063819  807725 retry.go:31] will retry after 768.263µs: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.064979  807725 retry.go:31] will retry after 1.495677ms: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.067206  807725 retry.go:31] will retry after 1.861739ms: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.069460  807725 retry.go:31] will retry after 2.949869ms: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.072679  807725 retry.go:31] will retry after 5.67341ms: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.078904  807725 retry.go:31] will retry after 8.844336ms: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.088146  807725 retry.go:31] will retry after 8.780989ms: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.097445  807725 retry.go:31] will retry after 14.615461ms: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.112710  807725 retry.go:31] will retry after 26.744605ms: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
I0929 11:31:17.139994  807725 retry.go:31] will retry after 53.283274ms: open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/scheduled-stop-472899/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-472899 --cancel-scheduled
E0929 11:31:41.409305  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-472899 -n scheduled-stop-472899
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-472899
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-472899 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-472899
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-472899: exit status 7 (72.122396ms)
-- stdout --
	scheduled-stop-472899
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-472899 -n scheduled-stop-472899
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-472899 -n scheduled-stop-472899: exit status 7 (70.180756ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-472899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-472899
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-472899: (1.659667652s)
--- PASS: TestScheduledStopUnix (95.03s)
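
The retry.go lines above show the test helper polling for the scheduled-stop pid file with roughly doubling delays. A minimal Go sketch of that backoff pattern, assuming nothing about minikube's internal retry package beyond what the log shows (the function name and path here are illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path with exponential backoff, mirroring the
// "will retry after ..." lines above. Not minikube's actual helper.
func waitForFile(path string, maxWait time.Duration) error {
	delay := 75 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay *= 2 // roughly doubles per attempt, as in the log
	}
}

func main() {
	if err := waitForFile("/tmp/scheduled-stop.pid", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}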

TestSkaffold (83.54s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3260168710 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-243191 --memory=3072 --driver=docker  --container-runtime=docker
E0929 11:32:44.075578  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-243191 --memory=3072 --driver=docker  --container-runtime=docker: (24.340026639s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3260168710 run --minikube-profile skaffold-243191 --kube-context skaffold-243191 --status-check=true --port-forward=false --interactive=false
E0929 11:33:38.341478  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3260168710 run --minikube-profile skaffold-243191 --kube-context skaffold-243191 --status-check=true --port-forward=false --interactive=false: (41.320196279s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-584f5b9f66-5b9xz" [cc5e5e42-90f8-45af-8310-bb6468e786c7] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.00310452s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-7d5bb8f6dc-7smvc" [f2582de0-2363-4180-84cd-aac7968c495e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003762694s
helpers_test.go:175: Cleaning up "skaffold-243191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-243191
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-243191: (3.370844289s)
--- PASS: TestSkaffold (83.54s)
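
For reference, the skaffold step above is plain CLI driving; a minimal sketch of the same call from Go with os/exec, using only the flags shown in the log (the profile name is taken from this run and is otherwise arbitrary):

package main

import (
	"log"
	"os/exec"
)

func main() {
	profile := "skaffold-243191" // assumes an existing minikube profile
	cmd := exec.Command("skaffold", "run",
		"--minikube-profile", profile,
		"--kube-context", profile,
		"--status-check=true",
		"--port-forward=false",
		"--interactive=false",
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("skaffold run failed: %v\n%s", err, out)
	}
	log.Printf("skaffold run succeeded:\n%s", out)
}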

TestInsufficientStorage (9.87s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-722054 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-722054 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.607501823s)
-- stdout --
	{"specversion":"1.0","id":"1e5038c7-2f62-4f4f-9feb-5c7fde1effd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-722054] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f25c184f-b3b2-4258-a624-3550d1e43df8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21656"}}
	{"specversion":"1.0","id":"5e8e113d-84d9-4d7b-8806-34a95755afc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"48ea20dd-03c5-49b3-a12f-2d2cd5e03195","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig"}}
	{"specversion":"1.0","id":"30899e54-9493-4a6e-a8e4-8e2f6b27bb06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube"}}
	{"specversion":"1.0","id":"04d09508-c504-4866-bd17-d25ea6dcb233","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"93ce8b05-81d0-4a89-9eff-93ff1bfd450f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"550b79c2-e891-407a-ae63-7356197b4127","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"87da4d2a-09ed-4b82-8176-503b6c04f3af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2716e5ad-6b90-465d-a7a5-cf35d74125cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"54ef609c-0b0b-48b0-94ff-5821f10555e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"bdbdb0a3-3208-46f5-9193-5e46dfe6fcd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-722054\" primary control-plane node in \"insufficient-storage-722054\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6cb832d5-2f9b-4858-9862-0b24f72bee73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"243e078a-1d38-40c4-8553-635e83fc8c06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fdc55cd-56b1-4d87-9173-d5a6a7118bc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-722054 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-722054 --output=json --layout=cluster: exit status 7 (282.964943ms)
-- stdout --
	{"Name":"insufficient-storage-722054","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-722054","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0929 11:34:01.189179 1080992 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-722054" does not appear in /home/jenkins/minikube-integration/21656-804126/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-722054 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-722054 --output=json --layout=cluster: exit status 7 (284.833532ms)
-- stdout --
	{"Name":"insufficient-storage-722054","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-722054","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0929 11:34:01.475124 1081096 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-722054" does not appear in /home/jenkins/minikube-integration/21656-804126/kubeconfig
	E0929 11:34:01.486780 1081096 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/insufficient-storage-722054/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-722054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-722054
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-722054: (1.698633001s)
--- PASS: TestInsufficientStorage (9.87s)
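
With --output=json, each progress line above is a CloudEvents-style JSON object, and the final error event carries the exit code and remediation advice. A minimal sketch of decoding those lines, with struct fields taken only from the output shown here (this is not minikube's own schema package):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the log lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe "minikube start --output=json" into this program.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}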

TestRunningBinaryUpgrade (60.01s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3657357192 start -p running-upgrade-995286 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3657357192 start -p running-upgrade-995286 --memory=3072 --vm-driver=docker  --container-runtime=docker: (27.795144s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-995286 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-995286 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.945593577s)
helpers_test.go:175: Cleaning up "running-upgrade-995286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-995286
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-995286: (3.295402093s)
--- PASS: TestRunningBinaryUpgrade (60.01s)

TestKubernetesUpgrade (354.19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-942413 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-942413 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.213444702s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-942413
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-942413: (10.753392888s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-942413 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-942413 status --format={{.Host}}: exit status 7 (78.108945ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-942413 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-942413 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m34.693400878s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-942413 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-942413 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-942413 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (77.368917ms)
-- stdout --
	* [kubernetes-upgrade-942413] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-942413
	    minikube start -p kubernetes-upgrade-942413 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9424132 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-942413 --kubernetes-version=v1.34.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-942413 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-942413 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.860525245s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-942413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-942413
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-942413: (2.439249887s)
--- PASS: TestKubernetesUpgrade (354.19s)
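
The downgrade attempt above fails fast with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) rather than touching the cluster. A minimal sketch of detecting that specific exit code from a caller, assuming only what this log shows about the meaning of 106:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start",
		"-p", "kubernetes-upgrade-942413", // profile name from this run
		"--kubernetes-version=v1.28.0")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		fmt.Println("downgrade refused; recreate the profile or keep the newer version")
		return
	}
	if err != nil {
		fmt.Println("start failed:", err)
	}
}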

TestMissingContainerUpgrade (98.4s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.45613220 start -p missing-upgrade-002465 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.45613220 start -p missing-upgrade-002465 --memory=3072 --driver=docker  --container-runtime=docker: (49.438258354s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-002465
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-002465: (1.619904899s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-002465
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-002465 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-002465 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.409384376s)
helpers_test.go:175: Cleaning up "missing-upgrade-002465" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-002465
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-002465: (2.244895918s)
--- PASS: TestMissingContainerUpgrade (98.40s)

TestStoppedBinaryUpgrade/Setup (2.66s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.66s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-889627 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-889627 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (75.494158ms)
-- stdout --
	* [NoKubernetes-889627] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-804126/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-804126/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (40.22s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-889627 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-889627 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.850650043s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-889627 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.22s)

TestStoppedBinaryUpgrade/Upgrade (74.18s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.917200920 start -p stopped-upgrade-927242 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.917200920 start -p stopped-upgrade-927242 --memory=3072 --vm-driver=docker  --container-runtime=docker: (46.777988449s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.917200920 -p stopped-upgrade-927242 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.917200920 -p stopped-upgrade-927242 stop: (10.770635091s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-927242 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-927242 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (16.624467601s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (74.18s)

TestNoKubernetes/serial/StartWithStopK8s (18.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-889627 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-889627 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (16.280414763s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-889627 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-889627 status -o json: exit status 2 (292.470301ms)
-- stdout --
	{"Name":"NoKubernetes-889627","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-889627
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-889627: (1.74882482s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.32s)
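
The status JSON printed above has a small, flat shape. A sketch of decoding it, with a struct derived only from the fields visible in this log; note that minikube status exits non-zero here because the kubelet and apiserver are stopped, so the output is decoded even when the run reports an error:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the fields seen in the -o json output above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out, runErr := exec.Command("minikube", "-p", "NoKubernetes-889627",
		"status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode failed:", err, "run error:", runErr)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}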

TestNoKubernetes/serial/Start (7.11s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-889627 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-889627 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (7.105851088s)
--- PASS: TestNoKubernetes/serial/Start (7.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-889627 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-889627 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.135455ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
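
The check above relies on the exit code alone: with --quiet, systemctl prints nothing, and the ssh session reports status 3 here (the code systemd returns for a unit that is not active). A minimal sketch of the same probe run directly; to run it inside the node as the test does, prefix the command with "minikube ssh --":

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; only the exit code signals the state.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &ee):
		fmt.Printf("kubelet is not active (exit status %d)\n", ee.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}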

TestNoKubernetes/serial/ProfileList (1.49s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.49s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-889627
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-889627: (1.214272663s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (9.08s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-889627 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-889627 --driver=docker  --container-runtime=docker: (9.077868304s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.08s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-927242
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-927242: (1.014036206s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-889627 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-889627 "sudo systemctl is-active --quiet service kubelet": exit status 1 (332.540364ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestPause/serial/Start (66.38s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-544497 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-544497 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m6.378055121s)
--- PASS: TestPause/serial/Start (66.38s)

TestPause/serial/SecondStartNoReconfiguration (36.27s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-544497 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-544497 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.246506587s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.27s)

TestPause/serial/Pause (0.51s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-544497 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.51s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-544497 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-544497 --output=json --layout=cluster: exit status 2 (329.1169ms)
-- stdout --
	{"Name":"pause-544497","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 10 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-544497","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
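
With --layout=cluster, the status JSON above uses HTTP-like codes: 418 for Paused, 405 for Stopped, 200 for OK. A sketch of reading just the top-level code, using only fields present in this output; status exits 2 while paused, so the output is decoded regardless of the run error:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus covers only the top-level fields shown in the log.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
}

func main() {
	out, _ := exec.Command("minikube", "status", "-p", "pause-544497",
		"--output=json", "--layout=cluster").Output()
	var cs clusterStatus
	if err := json.Unmarshal(out, &cs); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s: %d %s\n", cs.Name, cs.StatusCode, cs.StatusName) // e.g. 418 Paused
}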

TestPause/serial/Unpause (0.51s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-544497 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.51s)

TestPause/serial/PauseAgain (0.6s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-544497 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.60s)

TestPause/serial/DeletePaused (2.2s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-544497 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-544497 --alsologtostderr -v=5: (2.201937181s)
--- PASS: TestPause/serial/DeletePaused (2.20s)

TestPause/serial/VerifyDeletedResources (0.65s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-544497
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-544497: exit status 1 (18.74353ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-544497: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.65s)
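
The cleanup verification above treats a failing "docker volume inspect" as evidence the volume is gone. A minimal sketch of the same assertion, with the volume name copied from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "volume", "inspect", "pause-544497").CombinedOutput()
	if err != nil && strings.Contains(string(out), "no such volume") {
		fmt.Println("volume deleted as expected")
		return
	}
	fmt.Printf("volume may still exist (err=%v):\n%s", err, out)
}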

TestStartStop/group/old-k8s-version/serial/FirstStart (79.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-860313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-860313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m19.949695759s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (79.95s)

TestStartStop/group/no-preload/serial/FirstStart (72.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-662055 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 11:37:44.075777  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-662055 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m12.988071869s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.99s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-860313 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8ea1a35c-730c-4a16-bf99-b70cf9438755] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0929 11:38:38.341315  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/addons-529420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:38.918754  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:38.925140  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:38.936566  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:38.957955  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:38.999405  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:39.081620  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:39.243222  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:39.564907  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [8ea1a35c-730c-4a16-bf99-b70cf9438755] Running
E0929 11:38:40.206689  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:41.488651  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:38:44.050866  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004217413s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-860313 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.28s)
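
The DeployApp step above creates a busybox pod and then polls until every pod matching integration-test=busybox is Running. A minimal sketch of that wait using kubectl from Go; the context name and label selector are copied from the log, and the helper is illustrative, not the test's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPods polls kubectl until all pods matching selector report phase Running.
func waitForPods(kubeContext, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		running := err == nil && len(phases) > 0
		for _, p := range phases {
			if p != "Running" {
				running = false
			}
		}
		if running {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q not running within %v", selector, timeout)
}

func main() {
	if err := waitForPods("old-k8s-version-860313", "integration-test=busybox", 8*time.Minute); err != nil {
		fmt.Println(err)
	}
}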

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-860313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-860313 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/old-k8s-version/serial/Stop (10.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-860313 --alsologtostderr -v=3
E0929 11:38:49.173118  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-860313 --alsologtostderr -v=3: (10.87314068s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.87s)

TestStartStop/group/no-preload/serial/DeployApp (10.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-662055 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4531ef30-e1f6-46a2-be03-280a8a28836b] Pending
helpers_test.go:352: "busybox" [4531ef30-e1f6-46a2-be03-280a8a28836b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4531ef30-e1f6-46a2-be03-280a8a28836b] Running
E0929 11:38:59.414661  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.002985963s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-662055 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.25s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-860313 -n old-k8s-version-860313
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-860313 -n old-k8s-version-860313: exit status 7 (76.710726ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-860313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (25.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-860313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-860313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (25.410808384s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-860313 -n old-k8s-version-860313
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (25.77s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-662055 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-662055 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/no-preload/serial/Stop (10.85s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-662055 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-662055 --alsologtostderr -v=3: (10.849882193s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.85s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-662055 -n no-preload-662055
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-662055 -n no-preload-662055: exit status 7 (73.951808ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-662055 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (51s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-662055 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 11:39:19.896673  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-662055 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (50.636276726s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-662055 -n no-preload-662055
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.00s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qczq8" [29078f74-afcd-4c79-9b81-80479c209230] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qczq8" [29078f74-afcd-4c79-9b81-80479c209230] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004791712s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qczq8" [29078f74-afcd-4c79-9b81-80479c209230] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004169307s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-860313 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-860313 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-860313 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-860313 -n old-k8s-version-860313
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-860313 -n old-k8s-version-860313: exit status 2 (310.343363ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-860313 -n old-k8s-version-860313
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-860313 -n old-k8s-version-860313: exit status 2 (317.192299ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-860313 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-860313 -n old-k8s-version-860313
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-860313 -n old-k8s-version-860313
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.36s)
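
The Pause subtest is a fixed pause/verify/unpause sequence; as the non-zero exits above show, the status command exiting 2 while components are paused is expected and accepted by the test. A sketch with this run's profile:

    out/minikube-linux-amd64 pause -p old-k8s-version-860313 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-860313   # "Paused", exit 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p old-k8s-version-860313     # "Stopped", exit 2
    out/minikube-linux-amd64 unpause -p old-k8s-version-860313 --alsologtostderr -v=1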

TestStartStop/group/embed-certs/serial/FirstStart (67s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-852541 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-852541 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m7.000251391s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.00s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pzmvq" [8f5e75b6-e25f-4ab1-8a95-6dd227d6dcb3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003917958s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pzmvq" [8f5e75b6-e25f-4ab1-8a95-6dd227d6dcb3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003672226s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-662055 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-662055 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.73s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-662055 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-662055 -n no-preload-662055
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-662055 -n no-preload-662055: exit status 2 (354.462265ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-662055 -n no-preload-662055
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-662055 -n no-preload-662055: exit status 2 (332.331728ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-662055 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-662055 -n no-preload-662055
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-662055 -n no-preload-662055
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.73s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-583687 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-583687 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m12.613079442s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.61s)

TestStartStop/group/newest-cni/serial/FirstStart (30.69s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-543344 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-543344 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (30.689295814s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.69s)

TestStartStop/group/embed-certs/serial/DeployApp (9.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-852541 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a72eedf1-72d9-4786-ac1e-b862f386abf8] Pending
helpers_test.go:352: "busybox" [a72eedf1-72d9-4786-ac1e-b862f386abf8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a72eedf1-72d9-4786-ac1e-b862f386abf8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003534285s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-852541 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.24s)
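
DeployApp is plain kubectl against the profile's context: create the busybox pod from the suite's testdata, wait for it to go Ready, then check the file-descriptor limit inside it. A sketch (kubectl wait stands in for the test's own 8m poller):

    kubectl --context embed-certs-852541 create -f testdata/busybox.yaml
    kubectl --context embed-certs-852541 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context embed-certs-852541 exec busybox -- /bin/sh -c "ulimit -n"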

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-543344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)
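
EnableAddonWhileActive exercises minikube's per-addon image and registry overrides; the metrics-server image is redirected to an echoserver tag on fake.domain, presumably so nothing real is ever pulled and only the override wiring is verified:

    out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-543344 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain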

TestStartStop/group/newest-cni/serial/Stop (10.91s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-543344 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-543344 --alsologtostderr -v=3: (10.908191887s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.91s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-852541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-852541 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/embed-certs/serial/Stop (10.86s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-852541 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-852541 --alsologtostderr -v=3: (10.854945178s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.86s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-543344 -n newest-cni-543344
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-543344 -n newest-cni-543344: exit status 7 (112.618529ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-543344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)
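
EnableAddonAfterStop first confirms the profile is down (status prints "Stopped" and exits 7, which the test accepts), then enables the dashboard addon against the stopped profile so it takes effect on the next start:

    out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-543344 -n newest-cni-543344
    # exit status 7 here means a stopped host, which is expected at this point
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-543344 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4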

TestStartStop/group/newest-cni/serial/SecondStart (17.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-543344 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-543344 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (16.749059153s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-543344 -n newest-cni-543344
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.25s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-852541 -n embed-certs-852541
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-852541 -n embed-certs-852541: exit status 7 (88.269701ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-852541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (50.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-852541 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-852541 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (50.407286926s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-852541 -n embed-certs-852541
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.76s)

TestNetworkPlugins/group/auto/Start (72.98s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0929 11:41:22.780829  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/skaffold-243191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m12.977087348s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.98s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-543344 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
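
VerifyKubernetesImages dumps the images held by the container runtime and flags anything outside the expected Kubernetes set. A hand-run sketch (the jq step and the repoTags field name are assumptions about the JSON shape, not part of the test):

    out/minikube-linux-amd64 -p newest-cni-543344 image list --format=json | jq -r '.[].repoTags[]'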

TestStartStop/group/newest-cni/serial/Pause (2.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-543344 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-543344 -n newest-cni-543344
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-543344 -n newest-cni-543344: exit status 2 (326.544521ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-543344 -n newest-cni-543344
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-543344 -n newest-cni-543344: exit status 2 (322.160152ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-543344 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-543344 -n newest-cni-543344
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-543344 -n newest-cni-543344
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)

TestNetworkPlugins/group/kindnet/Start (59.64s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (59.636184683s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.64s)
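
The network-plugin Start runs differ only in the --cni value handed to the same start invocation; a sketch drawn from the runs in this report:

    out/minikube-linux-amd64 start -p kindnet-867825 --memory=3072 --wait=true \
        --wait-timeout=15m --cni=kindnet --driver=docker --container-runtime=docker
    # sibling groups swap in --cni=calico, --cni=flannel, --cni=false,
    # --enable-default-cni=true, or a manifest path like --cni=testdata/kube-flannel.yaml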

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-583687 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9cc52dd4-105e-4e08-93a5-527fcc9bff54] Pending
helpers_test.go:352: "busybox" [9cc52dd4-105e-4e08-93a5-527fcc9bff54] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9cc52dd4-105e-4e08-93a5-527fcc9bff54] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.004194632s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-583687 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-583687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-583687 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-583687 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-583687 --alsologtostderr -v=3: (10.97034318s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-583687 -n default-k8s-diff-port-583687
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-583687 -n default-k8s-diff-port-583687: exit status 7 (88.486309ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-583687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-583687 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-583687 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (47.953721738s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-583687 -n default-k8s-diff-port-583687
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.40s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-79vs4" [8271bac4-194d-412d-abc2-2dbd1419d298] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005973373s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-79vs4" [8271bac4-194d-412d-abc2-2dbd1419d298] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00301103s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-852541 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-852541 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.58s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-852541 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-852541 -n embed-certs-852541
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-852541 -n embed-certs-852541: exit status 2 (345.723801ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-852541 -n embed-certs-852541
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-852541 -n embed-certs-852541: exit status 2 (336.366975ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-852541 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-852541 -n embed-certs-852541
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-852541 -n embed-certs-852541
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.58s)

TestNetworkPlugins/group/calico/Start (53.53s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (53.531226601s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.53s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-867825 "pgrep -a kubelet"
I0929 11:42:30.463260  807725 config.go:182] Loaded profile config "auto-867825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/auto/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-867825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-whwg7" [a446d188-7e91-4bd3-b14e-bf192ef9ba88] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-whwg7" [a446d188-7e91-4bd3-b14e-bf192ef9ba88] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004035417s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)
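
NetCatPod force-replaces a small netcat deployment and waits for its pod to become Ready; a sketch (kubectl wait stands in for the test's 15m poller):

    kubectl --context auto-867825 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-867825 wait --for=condition=Ready pod -l app=netcat --timeout=15m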

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-lzg87" [01e549d4-1943-4b70-bfbb-d39ef3d671c2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003965226s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-867825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-867825 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
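
DNS, Localhost and HairPin all reuse that netcat deployment for three connectivity probes, verbatim from the runs above: cluster DNS resolution, a dial to localhost, and a hairpin dial back through the pod's own service name:

    kubectl --context auto-867825 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"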

TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-867825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m9zn6" [9e4e7b77-dcd5-428f-9673-9eb6c7c17c80] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m9zn6" [9e4e7b77-dcd5-428f-9673-9eb6c7c17c80] Running
E0929 11:42:44.076052  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/functional-452443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.006173976s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-867825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v7jrs" [2c4c0166-9f27-47e9-bb4f-fe23f3eb6987] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005651451s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v7jrs" [2c4c0166-9f27-47e9-bb4f-fe23f3eb6987] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005057315s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-583687 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestNetworkPlugins/group/custom-flannel/Start (51.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (51.303524326s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.30s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-583687 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-583687 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-583687 -n default-k8s-diff-port-583687
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-583687 -n default-k8s-diff-port-583687: exit status 2 (489.261663ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-583687 -n default-k8s-diff-port-583687
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-583687 -n default-k8s-diff-port-583687: exit status 2 (449.069274ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-583687 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-583687 -n default-k8s-diff-port-583687
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-583687 -n default-k8s-diff-port-583687
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.77s)

TestNetworkPlugins/group/false/Start (69.07s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m9.068755689s)
--- PASS: TestNetworkPlugins/group/false/Start (69.07s)

TestNetworkPlugins/group/enable-default-cni/Start (71.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m11.181565894s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-tx8sv" [63910186-8ae0-4886-a882-dafae4176b43] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004455605s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-867825 "pgrep -a kubelet"
I0929 11:43:22.701576  807725 config.go:182] Loaded profile config "calico-867825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-867825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jkxzw" [bc66cf46-977e-49a3-80e0-8cd23e85049e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jkxzw" [bc66cf46-977e-49a3-80e0-8cd23e85049e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004201197s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.22s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-867825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-867825 "pgrep -a kubelet"
I0929 11:43:52.593388  807725 config.go:182] Loaded profile config "custom-flannel-867825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-867825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gqjzm" [f4529481-612e-4fd5-9335-dbf3697f387a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gqjzm" [f4529481-612e-4fd5-9335-dbf3697f387a] Running
E0929 11:43:57.388542  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/no-preload-662055/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:43:59.949892  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/no-preload-662055/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003366658s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

TestNetworkPlugins/group/flannel/Start (44.91s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0929 11:43:56.106733  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/no-preload-662055/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:43:56.560608  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/old-k8s-version-860313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (44.914185127s)
--- PASS: TestNetworkPlugins/group/flannel/Start (44.91s)
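Note: each Start step boots a fresh profile with the requested CNI. To spot-check that the flannel DaemonSet came up afterwards (a sketch; the app=flannel label and kube-flannel namespace are taken from the ControllerPod step further down):
    kubectl --context flannel-867825 get pods -n kube-flannel -l app=flannel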

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-867825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)
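Note: the DNS step only asserts that in-cluster service discovery resolves. The same probe can be run by hand, optionally with the fully qualified name (assuming the default cluster.local domain):
    kubectl --context custom-flannel-867825 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local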

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
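Note: Localhost and HairPin reduce to the same nc probe of port 8080. Localhost checks the pod's loopback path; HairPin checks that the pod can reach itself back through its own Service name (hairpin NAT). Both can be run by hand:
    kubectl --context custom-flannel-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context custom-flannel-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"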

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-867825 "pgrep -a kubelet"
I0929 11:44:18.682127  807725 config.go:182] Loaded profile config "false-867825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-867825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-r4h2t" [05871ece-3568-4e18-b4b8-3c3a9d2daacf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-r4h2t" [05871ece-3568-4e18-b4b8-3c3a9d2daacf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.004438091s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (41.36s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (41.359850204s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-867825 "pgrep -a kubelet"
I0929 11:44:26.545074  807725 config.go:182] Loaded profile config "enable-default-cni-867825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-867825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dmdsd" [b4df6720-6fad-405c-9a31-8b970cde8704] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dmdsd" [b4df6720-6fad-405c-9a31-8b970cde8704] Running
E0929 11:44:35.795917  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/no-preload-662055/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00423754s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.22s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-867825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-867825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-6j2z7" [e88a3dd1-9870-4a84-894b-1a36fca76a94] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00631053s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-867825 "pgrep -a kubelet"
I0929 11:44:47.331925  807725 config.go:182] Loaded profile config "flannel-867825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-867825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5mp6x" [90cea4a4-c982-4e22-b78c-c6652a25d6ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5mp6x" [90cea4a4-c982-4e22-b78c-c6652a25d6ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004543104s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (71.84s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-867825 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m11.838488586s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (71.84s)
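Note: kubenet is not a CNI plugin, so this profile is selected with --network-plugin=kubenet rather than the --cni flag used by the other groups. Trimmed from the invocation logged above:
    out/minikube-linux-amd64 start -p kubenet-867825 --memory=3072 --network-plugin=kubenet --driver=docker --container-runtime=docker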

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-867825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0929 11:44:58.003994  807725 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/old-k8s-version-860313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-867825 "pgrep -a kubelet"
I0929 11:45:06.114582  807725 config.go:182] Loaded profile config "bridge-867825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-867825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cv9v7" [c7375c7a-4152-4774-989b-e8f80c961cb1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cv9v7" [c7375c7a-4152-4774-989b-e8f80c961cb1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004493955s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-867825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-867825 "pgrep -a kubelet"
I0929 11:46:01.177488  807725 config.go:182] Loaded profile config "kubenet-867825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-867825 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5s5nb" [3a47bfc0-0f10-497c-92bb-88b199623739] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5s5nb" [3a47bfc0-0f10-497c-92bb-88b199623739] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004836317s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-867825 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-867825 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

                                                
                                    

Test skip (22/346)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-256364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-256364
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.21s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-867825 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-867825

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-867825

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-867825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-867825

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-867825

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-867825

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-867825

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-867825

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-867825

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-867825

>>> host: /etc/nsswitch.conf:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: /etc/hosts:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: /etc/resolv.conf:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-867825

>>> host: crictl pods:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: crictl containers:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> k8s: describe netcat deployment:
error: context "cilium-867825" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-867825" does not exist

>>> k8s: netcat logs:
error: context "cilium-867825" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-867825" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-867825" does not exist

>>> k8s: coredns logs:
error: context "cilium-867825" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-867825" does not exist

>>> k8s: api server logs:
error: context "cilium-867825" does not exist

>>> host: /etc/cni:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: ip a s:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: ip r s:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: iptables-save:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: iptables table nat:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-867825

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-867825

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-867825" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-867825" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-867825

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-867825

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-867825" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-867825" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-867825" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-867825" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-867825" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: kubelet daemon config:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> k8s: kubelet logs:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:35:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-544497
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21656-804126/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:36:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-995286
contexts:
- context:
    cluster: pause-544497
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:35:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-544497
  name: pause-544497
- context:
    cluster: running-upgrade-995286
    user: running-upgrade-995286
  name: running-upgrade-995286
current-context: running-upgrade-995286
kind: Config
users:
- name: pause-544497
  user:
    client-certificate: /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/pause-544497/client.crt
    client-key: /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/pause-544497/client.key
- name: running-upgrade-995286
  user:
    client-certificate: /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/running-upgrade-995286/client.crt
    client-key: /home/jenkins/minikube-integration/21656-804126/.minikube/profiles/running-upgrade-995286/client.key
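Note: this kubectl config dump is the one probe above that returns real data: it is the harness's live kubeconfig, which has no cilium-867825 entry, which is why every other probe fails with a missing-context or missing-profile error. A quick manual check of the available contexts (standard kubectl):
    kubectl config get-contexts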

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-867825

>>> host: docker daemon status:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: docker daemon config:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: docker system info:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: cri-docker daemon status:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: cri-docker daemon config:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: cri-dockerd version:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: containerd daemon status:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: containerd daemon config:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: containerd config dump:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: crio daemon status:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: crio daemon config:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: /etc/crio:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

>>> host: crio config:
* Profile "cilium-867825" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-867825"

----------------------- debugLogs end: cilium-867825 [took: 4.011151762s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-867825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-867825
--- SKIP: TestNetworkPlugins/group/cilium (4.21s)