Test Report: Docker_Linux_crio 21490

ce0ab003608e00fd868941ed02a835e21158493a:2025-09-04:41284

Failed tests (16/325)

TestAddons/parallel/Ingress (491.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-049370 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-049370 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-049370 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [fd6a62c3-3f28-47de-b93e-6a4222d72423] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-049370 -n addons-049370
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-09-04 21:06:30.279231486 +0000 UTC m=+667.122834093
addons_test.go:252: (dbg) Run:  kubectl --context addons-049370 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-049370 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-049370/192.168.49.2
Start Time:       Thu, 04 Sep 2025 20:58:29 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.24
IPs:
IP:  10.244.0.24
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6ptm9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6ptm9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  8m1s                 default-scheduler  Successfully assigned default/nginx to addons-049370
Warning  Failed     7m29s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m43s (x4 over 8m)   kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     58s (x4 over 7m29s)  kubelet            Error: ErrImagePull
Warning  Failed     58s (x3 over 5m34s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    2s (x9 over 7m29s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     2s (x9 over 7m29s)   kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-049370 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-049370 logs nginx -n default: exit status 1 (59.729402ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-049370 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
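
The events above point to the root cause: the unauthenticated pull of docker.io/nginx:alpine hit Docker Hub's rate limit (toomanyrequests), so the pod stayed in ImagePullBackOff for the full 8m0s wait. As a hedged mitigation sketch (not part of this run), the image could be side-loaded into the addons-049370 node so the kubelet never has to pull from Docker Hub; this assumes the host running the commands is not itself rate limited:

	# pull once on the host (or through an authenticated/mirrored registry), then side-load it into the profile
	docker pull docker.io/nginx:alpine
	minikube -p addons-049370 image load docker.io/nginx:alpine
	# with the image present on the node, the kubelet should start the container on the next back-off retry
	kubectl --context addons-049370 get pod nginx -n default -w
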
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-049370
helpers_test.go:243: (dbg) docker inspect addons-049370:

-- stdout --
	[
	    {
	        "Id": "5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3",
	        "Created": "2025-09-04T20:55:59.262503813Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 390267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T20:55:59.29310334Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:26724cb1013292a869503ea8c35fa5a932e6ec3e4e85e318411373ae6a24478b",
	        "ResolvConfPath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/hosts",
	        "LogPath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3-json.log",
	        "Name": "/addons-049370",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-049370:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-049370",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3",
	                "LowerDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0-init/diff:/var/lib/docker/overlay2/fcc29a201369e5fb579bfafdafe90606e5b86bd2f4b52beb7f6a97f7e955214d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-049370",
	                "Source": "/var/lib/docker/volumes/addons-049370/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-049370",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-049370",
	                "name.minikube.sigs.k8s.io": "addons-049370",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebe38700b80a638159b3489df03c5870e9f15ecf00ad219d1d9b3fbc49acec55",
	            "SandboxKey": "/var/run/docker/netns/ebe38700b80a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-049370": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:41:22:73:0f:f1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2048bdf288b9f197869aef65f41d479e8afce6e3ad28d597acd24bc87d544c41",
	                    "EndpointID": "84d0e0934b5175bdbf5a7fed011cc5c5fd5e6125bf967cd744e715e3f5eb7d74",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-049370",
	                        "5caec540cec0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
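
For cross-checking the port mappings captured above, the same Go-template style the harness uses later for 22/tcp can be pointed at any published port; a minimal sketch, assuming the addons-049370 container from this report is still running:

	# print the host port published for the Kubernetes API server port (33148 in the inspect output above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-049370
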
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-049370 -n addons-049370
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-049370 logs -n 25: (1.106693864s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-807406                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-807406   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ start   │ --download-only -p download-docker-306069 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-306069 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ delete  │ -p download-docker-306069                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-306069 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ start   │ --download-only -p binary-mirror-563304 --alsologtostderr --binary-mirror http://127.0.0.1:41655 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-563304   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ delete  │ -p binary-mirror-563304                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-563304   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ addons  │ disable dashboard -p addons-049370                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p addons-049370                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ start   │ -p addons-049370 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ enable headlamp -p addons-049370 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-049370                                                                                                                                                                                                                                                                                                                                                                                           │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ ip      │ addons-049370 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 21:04 UTC │ 04 Sep 25 21:04 UTC │
	│ addons  │ addons-049370 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 21:05 UTC │ 04 Sep 25 21:05 UTC │
	│ addons  │ addons-049370 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 21:05 UTC │ 04 Sep 25 21:05 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 20:55:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:55:35.931187  389648 out.go:360] Setting OutFile to fd 1 ...
	I0904 20:55:35.931440  389648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:35.931451  389648 out.go:374] Setting ErrFile to fd 2...
	I0904 20:55:35.931458  389648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:35.931653  389648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 20:55:35.932252  389648 out.go:368] Setting JSON to false
	I0904 20:55:35.933194  389648 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9485,"bootTime":1757009851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 20:55:35.933295  389648 start.go:140] virtualization: kvm guest
	I0904 20:55:35.935053  389648 out.go:179] * [addons-049370] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 20:55:35.936502  389648 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 20:55:35.936515  389648 notify.go:220] Checking for updates...
	I0904 20:55:35.938589  389648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:55:35.939875  389648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 20:55:35.941016  389648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 20:55:35.942120  389648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 20:55:35.943340  389648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 20:55:35.944678  389648 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 20:55:35.967955  389648 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 20:55:35.968038  389648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:55:36.013884  389648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 20:55:36.00384503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 20:55:36.013990  389648 docker.go:318] overlay module found
	I0904 20:55:36.015880  389648 out.go:179] * Using the docker driver based on user configuration
	I0904 20:55:36.017259  389648 start.go:304] selected driver: docker
	I0904 20:55:36.017279  389648 start.go:918] validating driver "docker" against <nil>
	I0904 20:55:36.017301  389648 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 20:55:36.018181  389648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:55:36.061743  389648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 20:55:36.053555345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 20:55:36.061946  389648 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 20:55:36.062186  389648 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:55:36.063851  389648 out.go:179] * Using Docker driver with root privileges
	I0904 20:55:36.065032  389648 cni.go:84] Creating CNI manager for ""
	I0904 20:55:36.065096  389648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:55:36.065109  389648 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 20:55:36.065189  389648 start.go:348] cluster config:
	{Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0904 20:55:36.066545  389648 out.go:179] * Starting "addons-049370" primary control-plane node in "addons-049370" cluster
	I0904 20:55:36.067696  389648 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 20:55:36.068952  389648 out.go:179] * Pulling base image v0.0.47-1756116447-21413 ...
	I0904 20:55:36.070027  389648 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:55:36.070067  389648 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 20:55:36.070084  389648 cache.go:58] Caching tarball of preloaded images
	I0904 20:55:36.070129  389648 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0904 20:55:36.070184  389648 preload.go:172] Found /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 20:55:36.070196  389648 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 20:55:36.070509  389648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/config.json ...
	I0904 20:55:36.070535  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/config.json: {Name:mkeaddf16ea076f194194c7e6e0eb8ad847648bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:55:36.085707  389648 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 to local cache
	I0904 20:55:36.085814  389648 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory
	I0904 20:55:36.085830  389648 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory, skipping pull
	I0904 20:55:36.085834  389648 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 exists in cache, skipping pull
	I0904 20:55:36.085841  389648 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 as a tarball
	I0904 20:55:36.085848  389648 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 from local cache
	I0904 20:55:47.569774  389648 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 from cached tarball
	I0904 20:55:47.569822  389648 cache.go:232] Successfully downloaded all kic artifacts
	I0904 20:55:47.569872  389648 start.go:360] acquireMachinesLock for addons-049370: {Name:mk8e52f32278895920c6de02ca736f9f45438008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:55:47.569963  389648 start.go:364] duration metric: took 71.514µs to acquireMachinesLock for "addons-049370"
	I0904 20:55:47.569986  389648 start.go:93] Provisioning new machine with config: &{Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:55:47.570051  389648 start.go:125] createHost starting for "" (driver="docker")
	I0904 20:55:47.571722  389648 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0904 20:55:47.571956  389648 start.go:159] libmachine.API.Create for "addons-049370" (driver="docker")
	I0904 20:55:47.571986  389648 client.go:168] LocalClient.Create starting
	I0904 20:55:47.572093  389648 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem
	I0904 20:55:47.750984  389648 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem
	I0904 20:55:47.850792  389648 cli_runner.go:164] Run: docker network inspect addons-049370 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 20:55:47.867272  389648 cli_runner.go:211] docker network inspect addons-049370 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 20:55:47.867344  389648 network_create.go:284] running [docker network inspect addons-049370] to gather additional debugging logs...
	I0904 20:55:47.867369  389648 cli_runner.go:164] Run: docker network inspect addons-049370
	W0904 20:55:47.882593  389648 cli_runner.go:211] docker network inspect addons-049370 returned with exit code 1
	I0904 20:55:47.882619  389648 network_create.go:287] error running [docker network inspect addons-049370]: docker network inspect addons-049370: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-049370 not found
	I0904 20:55:47.882643  389648 network_create.go:289] output of [docker network inspect addons-049370]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-049370 not found
	
	** /stderr **
	I0904 20:55:47.882767  389648 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:55:47.897896  389648 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f29240}
	I0904 20:55:47.897941  389648 network_create.go:124] attempt to create docker network addons-049370 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0904 20:55:47.897989  389648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-049370 addons-049370
	I0904 20:55:47.946511  389648 network_create.go:108] docker network addons-049370 192.168.49.0/24 created
	I0904 20:55:47.946541  389648 kic.go:121] calculated static IP "192.168.49.2" for the "addons-049370" container
	I0904 20:55:47.946616  389648 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 20:55:47.961507  389648 cli_runner.go:164] Run: docker volume create addons-049370 --label name.minikube.sigs.k8s.io=addons-049370 --label created_by.minikube.sigs.k8s.io=true
	I0904 20:55:47.977348  389648 oci.go:103] Successfully created a docker volume addons-049370
	I0904 20:55:47.977414  389648 cli_runner.go:164] Run: docker run --rm --name addons-049370-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-049370 --entrypoint /usr/bin/test -v addons-049370:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -d /var/lib
	I0904 20:55:54.908931  389648 cli_runner.go:217] Completed: docker run --rm --name addons-049370-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-049370 --entrypoint /usr/bin/test -v addons-049370:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -d /var/lib: (6.931464681s)
	I0904 20:55:54.908963  389648 oci.go:107] Successfully prepared a docker volume addons-049370
	I0904 20:55:54.908988  389648 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:55:54.909014  389648 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 20:55:54.909085  389648 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-049370:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 20:55:59.203486  389648 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-049370:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.294349299s)
	I0904 20:55:59.203526  389648 kic.go:203] duration metric: took 4.294508066s to extract preloaded images to volume ...
	W0904 20:55:59.203673  389648 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 20:55:59.203816  389648 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 20:55:59.248150  389648 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-049370 --name addons-049370 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-049370 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-049370 --network addons-049370 --ip 192.168.49.2 --volume addons-049370:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9
	I0904 20:55:59.483162  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Running}}
	I0904 20:55:59.500560  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:55:59.519189  389648 cli_runner.go:164] Run: docker exec addons-049370 stat /var/lib/dpkg/alternatives/iptables
	I0904 20:55:59.559150  389648 oci.go:144] the created container "addons-049370" has a running status.
	I0904 20:55:59.559182  389648 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa...
	I0904 20:55:59.730819  389648 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 20:55:59.749901  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:55:59.769336  389648 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 20:55:59.769365  389648 kic_runner.go:114] Args: [docker exec --privileged addons-049370 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 20:55:59.858697  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:55:59.878986  389648 machine.go:93] provisionDockerMachine start ...
	I0904 20:55:59.879111  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:55:59.900388  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:55:59.900618  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:55:59.900630  389648 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 20:56:00.092134  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-049370
	
	I0904 20:56:00.092166  389648 ubuntu.go:182] provisioning hostname "addons-049370"
	I0904 20:56:00.092222  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.110942  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:00.111171  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:56:00.111192  389648 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-049370 && echo "addons-049370" | sudo tee /etc/hostname
	I0904 20:56:00.235028  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-049370
	
	I0904 20:56:00.235115  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.254182  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:00.254444  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:56:00.254463  389648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-049370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-049370/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-049370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 20:56:00.364487  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 20:56:00.364528  389648 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21490-384635/.minikube CaCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21490-384635/.minikube}
	I0904 20:56:00.364564  389648 ubuntu.go:190] setting up certificates
	I0904 20:56:00.364581  389648 provision.go:84] configureAuth start
	I0904 20:56:00.364638  389648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-049370
	I0904 20:56:00.380933  389648 provision.go:143] copyHostCerts
	I0904 20:56:00.381007  389648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem (1078 bytes)
	I0904 20:56:00.381110  389648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem (1123 bytes)
	I0904 20:56:00.381171  389648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem (1675 bytes)
	I0904 20:56:00.381291  389648 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem org=jenkins.addons-049370 san=[127.0.0.1 192.168.49.2 addons-049370 localhost minikube]
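The machine server certificate generated above is signed with SANs covering the loopback address, the node IP, the profile name, localhost and "minikube". Its SAN list can be inspected directly with openssl (path shown for a default local layout rather than the Jenkins workspace in this log):

    # print the Subject Alternative Names baked into the machine server cert
    openssl x509 -in ~/.minikube/machines/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'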
	I0904 20:56:00.582774  389648 provision.go:177] copyRemoteCerts
	I0904 20:56:00.582833  389648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 20:56:00.582888  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.600896  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:00.685189  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 20:56:00.706872  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 20:56:00.727318  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 20:56:00.747581  389648 provision.go:87] duration metric: took 382.988372ms to configureAuth
	I0904 20:56:00.747609  389648 ubuntu.go:206] setting minikube options for container-runtime
	I0904 20:56:00.747766  389648 config.go:182] Loaded profile config "addons-049370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:00.747906  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.764149  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:00.764350  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:56:00.764368  389648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 20:56:00.958932  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 20:56:00.958968  389648 machine.go:96] duration metric: took 1.079954584s to provisionDockerMachine
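The CRIO_MINIKUBE_OPTIONS drop-in written a few lines above only has an effect because the CRI-O unit in the kicbase image reads /etc/sysconfig/crio.minikube as an environment file; the systemctl restart then picks up the extra --insecure-registry flag for the service CIDR. A sanity check from inside the node (a sketch, assuming a shell obtained via `minikube ssh`; the EnvironmentFiles wiring is an assumption about the kicbase unit, not something this log prints):

    cat /etc/sysconfig/crio.minikube          # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl show crio -p EnvironmentFiles   # confirms the unit actually reads that file
    systemctl is-active crio                  # expect "active" after the restart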
	I0904 20:56:00.958982  389648 client.go:171] duration metric: took 13.386987071s to LocalClient.Create
	I0904 20:56:00.959009  389648 start.go:167] duration metric: took 13.387053802s to libmachine.API.Create "addons-049370"
	I0904 20:56:00.959025  389648 start.go:293] postStartSetup for "addons-049370" (driver="docker")
	I0904 20:56:00.959040  389648 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 20:56:00.959109  389648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 20:56:00.959158  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.975608  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.061278  389648 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 20:56:01.064210  389648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 20:56:01.064237  389648 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 20:56:01.064244  389648 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 20:56:01.064251  389648 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 20:56:01.064263  389648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/addons for local assets ...
	I0904 20:56:01.064321  389648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/files for local assets ...
	I0904 20:56:01.064347  389648 start.go:296] duration metric: took 105.314476ms for postStartSetup
	I0904 20:56:01.064647  389648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-049370
	I0904 20:56:01.081390  389648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/config.json ...
	I0904 20:56:01.081619  389648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 20:56:01.081659  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:01.098242  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.177520  389648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 20:56:01.181443  389648 start.go:128] duration metric: took 13.611378177s to createHost
	I0904 20:56:01.181464  389648 start.go:83] releasing machines lock for "addons-049370", held for 13.611489751s
	I0904 20:56:01.181518  389648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-049370
	I0904 20:56:01.197665  389648 ssh_runner.go:195] Run: cat /version.json
	I0904 20:56:01.197712  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:01.197747  389648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 20:56:01.197832  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:01.217406  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.217960  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.369596  389648 ssh_runner.go:195] Run: systemctl --version
	I0904 20:56:01.373474  389648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 20:56:01.509565  389648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 20:56:01.513834  389648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:56:01.530180  389648 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 20:56:01.530256  389648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:56:01.553751  389648 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
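Since kindnet will manage pod networking, minikube parks any pre-existing loopback, bridge and podman CNI configs by renaming them with a .mk_disabled suffix instead of deleting them. They can be listed, and if necessary restored, along these lines (the second command is illustrative only, using the filename reported in the log):

    # list the CNI configs minikube has disabled
    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled'
    # restore one by stripping the suffix
    sudo sh -c 'f=/etc/cni/net.d/87-podman-bridge.conflist.mk_disabled; mv "$f" "${f%.mk_disabled}"'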
	I0904 20:56:01.553778  389648 start.go:495] detecting cgroup driver to use...
	I0904 20:56:01.553812  389648 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 20:56:01.553868  389648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 20:56:01.567182  389648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 20:56:01.576378  389648 docker.go:218] disabling cri-docker service (if available) ...
	I0904 20:56:01.576432  389648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 20:56:01.587988  389648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 20:56:01.599829  389648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 20:56:01.673115  389648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 20:56:01.753644  389648 docker.go:234] disabling docker service ...
	I0904 20:56:01.753708  389648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 20:56:01.770449  389648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 20:56:01.780079  389648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 20:56:01.852634  389648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 20:56:01.929656  389648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 20:56:01.939388  389648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 20:56:01.953483  389648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 20:56:01.953533  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.961514  389648 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 20:56:01.961581  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.969587  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.977328  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.985460  389648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 20:56:01.992893  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:02.000897  389648 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:02.014229  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:02.022636  389648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 20:56:02.029801  389648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 20:56:02.036815  389648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:02.107470  389648 ssh_runner.go:195] Run: sudo systemctl restart crio
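The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup and open unprivileged low ports, and the restart makes them take effect. The rewritten keys can be checked in the drop-in afterwards (values below are the ones implied by the log; exact TOML layout may differ between CRI-O versions):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",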
	I0904 20:56:02.204181  389648 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 20:56:02.204269  389648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 20:56:02.207556  389648 start.go:563] Will wait 60s for crictl version
	I0904 20:56:02.207613  389648 ssh_runner.go:195] Run: which crictl
	I0904 20:56:02.210531  389648 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 20:56:02.242395  389648 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 20:56:02.242466  389648 ssh_runner.go:195] Run: crio --version
	I0904 20:56:02.275988  389648 ssh_runner.go:195] Run: crio --version
	I0904 20:56:02.310411  389648 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 20:56:02.311905  389648 cli_runner.go:164] Run: docker network inspect addons-049370 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:56:02.327725  389648 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 20:56:02.331056  389648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:56:02.340959  389648 kubeadm.go:875] updating cluster {Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 20:56:02.341073  389648 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:56:02.341116  389648 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:56:02.405091  389648 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:56:02.405113  389648 crio.go:433] Images already preloaded, skipping extraction
	I0904 20:56:02.405157  389648 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:56:02.435602  389648 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:56:02.435624  389648 cache_images.go:85] Images are preloaded, skipping loading
	I0904 20:56:02.435633  389648 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0904 20:56:02.435742  389648 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-049370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 20:56:02.435801  389648 ssh_runner.go:195] Run: crio config
	I0904 20:56:02.475208  389648 cni.go:84] Creating CNI manager for ""
	I0904 20:56:02.475229  389648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:56:02.475242  389648 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 20:56:02.475263  389648 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-049370 NodeName:addons-049370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 20:56:02.475385  389648 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-049370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 20:56:02.475439  389648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 20:56:02.483384  389648 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 20:56:02.483434  389648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 20:56:02.490999  389648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 20:56:02.506097  389648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 20:56:02.521263  389648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
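The 2210-byte payload just copied is the rendered kubeadm config shown above; it is staged as kubeadm.yaml.new and promoted to /var/tmp/minikube/kubeadm.yaml before init runs (the cp appears further down in this log). If such a file ever needs to be exercised by hand, kubeadm's standard dry-run mode is one non-destructive option (a sketch, not something minikube itself runs here):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run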
	I0904 20:56:02.536086  389648 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0904 20:56:02.539041  389648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:56:02.548083  389648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:02.620733  389648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:56:02.632098  389648 certs.go:68] Setting up /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370 for IP: 192.168.49.2
	I0904 20:56:02.632134  389648 certs.go:194] generating shared ca certs ...
	I0904 20:56:02.632155  389648 certs.go:226] acquiring lock for ca certs: {Name:mkab35eaf89c739a644dd45428dbbd1b30c489d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:02.632303  389648 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key
	I0904 20:56:02.772055  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt ...
	I0904 20:56:02.772085  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt: {Name:mk404ac6f8708b208ba3c17564d32d1c6e1f2d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:02.772267  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key ...
	I0904 20:56:02.772279  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key: {Name:mk0f029ece1be42b4490f030d22d0963e0de5ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:02.772354  389648 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key
	I0904 20:56:03.010123  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt ...
	I0904 20:56:03.010158  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt: {Name:mk7836ca5bbc78d58e9f795ae3bd0cc1b3f94116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.010336  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key ...
	I0904 20:56:03.010350  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key: {Name:mk4a37f8d0fc0b197f0796089f579493b4ab1519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.010419  389648 certs.go:256] generating profile certs ...
	I0904 20:56:03.010492  389648 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.key
	I0904 20:56:03.010508  389648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt with IP's: []
	I0904 20:56:03.189084  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt ...
	I0904 20:56:03.189116  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: {Name:mkd7ec52fc00b41923df1429201e9537ed50a6ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.189278  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.key ...
	I0904 20:56:03.189288  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.key: {Name:mk02506672d1abc668baddf35412038560ece7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.189360  389648 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8
	I0904 20:56:03.189379  389648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0904 20:56:03.499646  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8 ...
	I0904 20:56:03.499681  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8: {Name:mk8c9ae053706a4ea8f20f5fd17de3c20f5c4e30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.499842  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8 ...
	I0904 20:56:03.499857  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8: {Name:mk9c5b0ad197ad61ad1f2b3b99dfc9c995bc0acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.499927  389648 certs.go:381] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt
	I0904 20:56:03.500017  389648 certs.go:385] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key
	I0904 20:56:03.500063  389648 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key
	I0904 20:56:03.500080  389648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt with IP's: []
	I0904 20:56:04.206716  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt ...
	I0904 20:56:04.206749  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt: {Name:mk2210684251083ae7ccb41ecbd3350906b53776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:04.206912  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key ...
	I0904 20:56:04.206925  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key: {Name:mk24ebbc3c1cb4ca4f1f7bb1a93ec6d982e6058d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:04.207093  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem (1679 bytes)
	I0904 20:56:04.207128  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem (1078 bytes)
	I0904 20:56:04.207156  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem (1123 bytes)
	I0904 20:56:04.207178  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem (1675 bytes)
	I0904 20:56:04.207825  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 20:56:04.229255  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 20:56:04.249412  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 20:56:04.269463  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 20:56:04.289100  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 20:56:04.309546  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 20:56:04.330101  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 20:56:04.350231  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 20:56:04.370529  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 20:56:04.390259  389648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 20:56:04.404879  389648 ssh_runner.go:195] Run: openssl version
	I0904 20:56:04.409558  389648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 20:56:04.417330  389648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:04.420173  389648 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:56 /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:04.420213  389648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:04.426284  389648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
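The b5213941.0 symlink created above is how OpenSSL finds trust anchors: the filename is the certificate's subject hash, the value printed by the -hash invocation two lines earlier, plus a .0 suffix. Recreating the same link by hand would look like:

    # compute the subject hash and link the CA into the hashed trust directory
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"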
	I0904 20:56:04.434253  389648 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 20:56:04.437015  389648 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 20:56:04.437090  389648 kubeadm.go:392] StartCluster: {Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:56:04.437155  389648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 20:56:04.437197  389648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 20:56:04.468884  389648 cri.go:89] found id: ""
	I0904 20:56:04.468950  389648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 20:56:04.476436  389648 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 20:56:04.483832  389648 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 20:56:04.483872  389648 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 20:56:04.491177  389648 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 20:56:04.491196  389648 kubeadm.go:157] found existing configuration files:
	
	I0904 20:56:04.491247  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 20:56:04.498385  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 20:56:04.498431  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 20:56:04.505641  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 20:56:04.512961  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 20:56:04.512996  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 20:56:04.519960  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 20:56:04.527106  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 20:56:04.527145  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 20:56:04.534344  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 20:56:04.541535  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 20:56:04.541584  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 20:56:04.548873  389648 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 20:56:04.583125  389648 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 20:56:04.583201  389648 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 20:56:04.597681  389648 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 20:56:04.597741  389648 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0904 20:56:04.597803  389648 kubeadm.go:310] OS: Linux
	I0904 20:56:04.597915  389648 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 20:56:04.597990  389648 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 20:56:04.598061  389648 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 20:56:04.598158  389648 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 20:56:04.598223  389648 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 20:56:04.598271  389648 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 20:56:04.598336  389648 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 20:56:04.598406  389648 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 20:56:04.598474  389648 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 20:56:04.647143  389648 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 20:56:04.647322  389648 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 20:56:04.647453  389648 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 20:56:04.653687  389648 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 20:56:04.656516  389648 out.go:252]   - Generating certificates and keys ...
	I0904 20:56:04.656617  389648 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 20:56:04.656693  389648 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 20:56:04.868159  389648 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 20:56:05.089300  389648 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 20:56:05.307580  389648 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 20:56:05.541675  389648 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 20:56:05.660773  389648 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 20:56:05.660952  389648 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-049370 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:56:05.874335  389648 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 20:56:05.874525  389648 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-049370 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:56:06.201674  389648 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 20:56:06.395227  389648 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 20:56:06.658231  389648 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 20:56:06.658358  389648 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 20:56:06.844487  389648 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 20:56:07.298671  389648 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 20:56:07.543710  389648 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 20:56:07.923783  389648 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 20:56:08.223748  389648 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 20:56:08.224259  389648 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 20:56:08.226815  389648 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 20:56:08.228639  389648 out.go:252]   - Booting up control plane ...
	I0904 20:56:08.228790  389648 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 20:56:08.228909  389648 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 20:56:08.228988  389648 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 20:56:08.237068  389648 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 20:56:08.237206  389648 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 20:56:08.242388  389648 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 20:56:08.242635  389648 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 20:56:08.242706  389648 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 20:56:08.316793  389648 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 20:56:08.316922  389648 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 20:56:08.818465  389648 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.780617ms
	I0904 20:56:08.822350  389648 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 20:56:08.822466  389648 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0904 20:56:08.822584  389648 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 20:56:08.822692  389648 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 20:56:10.827725  389648 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.005237267s
	I0904 20:56:11.470833  389648 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.648459396s
	I0904 20:56:13.324669  389648 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.502233446s
	I0904 20:56:13.335088  389648 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 20:56:13.344120  389648 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 20:56:13.351749  389648 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 20:56:13.351978  389648 kubeadm.go:310] [mark-control-plane] Marking the node addons-049370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 20:56:13.359295  389648 kubeadm.go:310] [bootstrap-token] Using token: 2wn3c0.ojgacqfx8o0hgs3z
	I0904 20:56:13.360520  389648 out.go:252]   - Configuring RBAC rules ...
	I0904 20:56:13.360674  389648 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 20:56:13.363353  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 20:56:13.367752  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 20:56:13.369941  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 20:56:13.372028  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 20:56:13.375032  389648 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 20:56:13.729580  389648 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 20:56:14.144230  389648 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 20:56:14.730781  389648 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 20:56:14.731685  389648 kubeadm.go:310] 
	I0904 20:56:14.731789  389648 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 20:56:14.731799  389648 kubeadm.go:310] 
	I0904 20:56:14.731900  389648 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 20:56:14.731934  389648 kubeadm.go:310] 
	I0904 20:56:14.731997  389648 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 20:56:14.732055  389648 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 20:56:14.732151  389648 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 20:56:14.732161  389648 kubeadm.go:310] 
	I0904 20:56:14.732233  389648 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 20:56:14.732242  389648 kubeadm.go:310] 
	I0904 20:56:14.732312  389648 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 20:56:14.732321  389648 kubeadm.go:310] 
	I0904 20:56:14.732378  389648 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 20:56:14.732445  389648 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 20:56:14.732534  389648 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 20:56:14.732544  389648 kubeadm.go:310] 
	I0904 20:56:14.732650  389648 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 20:56:14.732787  389648 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 20:56:14.732801  389648 kubeadm.go:310] 
	I0904 20:56:14.732903  389648 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2wn3c0.ojgacqfx8o0hgs3z \
	I0904 20:56:14.733021  389648 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 \
	I0904 20:56:14.733052  389648 kubeadm.go:310] 	--control-plane 
	I0904 20:56:14.733062  389648 kubeadm.go:310] 
	I0904 20:56:14.733161  389648 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 20:56:14.733169  389648 kubeadm.go:310] 
	I0904 20:56:14.733281  389648 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2wn3c0.ojgacqfx8o0hgs3z \
	I0904 20:56:14.733409  389648 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 
	I0904 20:56:14.735269  389648 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0904 20:56:14.735560  389648 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0904 20:56:14.735715  389648 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 20:56:14.735757  389648 cni.go:84] Creating CNI manager for ""
	I0904 20:56:14.735771  389648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:56:14.737265  389648 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0904 20:56:14.738354  389648 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 20:56:14.741948  389648 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0904 20:56:14.741966  389648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 20:56:14.758407  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 20:56:14.949539  389648 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 20:56:14.949629  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:14.949645  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-049370 minikube.k8s.io/updated_at=2025_09_04T20_56_14_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a minikube.k8s.io/name=addons-049370 minikube.k8s.io/primary=true
	I0904 20:56:14.957282  389648 ops.go:34] apiserver oom_adj: -16
	I0904 20:56:15.056202  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:15.556268  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:16.056217  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:16.557001  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:17.057153  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:17.556713  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:18.057056  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:18.556307  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:19.057162  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:19.120633  389648 kubeadm.go:1105] duration metric: took 4.171070637s to wait for elevateKubeSystemPrivileges
	I0904 20:56:19.120676  389648 kubeadm.go:394] duration metric: took 14.683591745s to StartCluster
	I0904 20:56:19.120715  389648 settings.go:142] acquiring lock: {Name:mke06342cfb6705345a5c7324f763dc44aea4569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:19.120870  389648 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 20:56:19.121542  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/kubeconfig: {Name:mk6b311573f3fade9cba8f894d5c9f5ca76d1e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:19.121797  389648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 20:56:19.121845  389648 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:56:19.121892  389648 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0904 20:56:19.122079  389648 config.go:182] Loaded profile config "addons-049370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:19.122530  389648 addons.go:69] Setting inspektor-gadget=true in profile "addons-049370"
	I0904 20:56:19.122543  389648 addons.go:69] Setting yakd=true in profile "addons-049370"
	I0904 20:56:19.122568  389648 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-049370"
	I0904 20:56:19.122574  389648 addons.go:69] Setting registry-creds=true in profile "addons-049370"
	I0904 20:56:19.122584  389648 addons.go:238] Setting addon yakd=true in "addons-049370"
	I0904 20:56:19.122588  389648 addons.go:69] Setting metrics-server=true in profile "addons-049370"
	I0904 20:56:19.122595  389648 addons.go:238] Setting addon registry-creds=true in "addons-049370"
	I0904 20:56:19.122597  389648 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-049370"
	I0904 20:56:19.122606  389648 addons.go:238] Setting addon metrics-server=true in "addons-049370"
	I0904 20:56:19.122631  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122635  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122637  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122574  389648 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-049370"
	I0904 20:56:19.122615  389648 addons.go:69] Setting registry=true in profile "addons-049370"
	I0904 20:56:19.122683  389648 addons.go:69] Setting cloud-spanner=true in profile "addons-049370"
	I0904 20:56:19.122665  389648 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-049370"
	I0904 20:56:19.122703  389648 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-049370"
	I0904 20:56:19.122729  389648 addons.go:238] Setting addon registry=true in "addons-049370"
	I0904 20:56:19.122730  389648 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-049370"
	I0904 20:56:19.122740  389648 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-049370"
	I0904 20:56:19.122757  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122781  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.123155  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123184  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123217  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122637  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.123265  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123272  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123605  389648 addons.go:69] Setting storage-provisioner=true in profile "addons-049370"
	I0904 20:56:19.123629  389648 addons.go:238] Setting addon storage-provisioner=true in "addons-049370"
	I0904 20:56:19.123657  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.123677  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123784  389648 addons.go:69] Setting volumesnapshots=true in profile "addons-049370"
	I0904 20:56:19.123801  389648 addons.go:238] Setting addon volumesnapshots=true in "addons-049370"
	I0904 20:56:19.123826  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.124143  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122696  389648 addons.go:238] Setting addon cloud-spanner=true in "addons-049370"
	I0904 20:56:19.124582  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.124795  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123219  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.125090  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.125204  389648 addons.go:69] Setting gcp-auth=true in profile "addons-049370"
	I0904 20:56:19.126262  389648 mustload.go:65] Loading cluster: addons-049370
	I0904 20:56:19.126543  389648 config.go:182] Loaded profile config "addons-049370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:19.126863  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122674  389648 addons.go:69] Setting default-storageclass=true in profile "addons-049370"
	I0904 20:56:19.130409  389648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-049370"
	I0904 20:56:19.130766  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122663  389648 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-049370"
	I0904 20:56:19.132380  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.132897  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.160874  389648 out.go:179] * Verifying Kubernetes components...
	I0904 20:56:19.122562  389648 addons.go:238] Setting addon inspektor-gadget=true in "addons-049370"
	I0904 20:56:19.161079  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.161765  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.163225  389648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:19.125353  389648 addons.go:69] Setting ingress=true in profile "addons-049370"
	I0904 20:56:19.164437  389648 addons.go:238] Setting addon ingress=true in "addons-049370"
	I0904 20:56:19.164483  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.164897  389648 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0904 20:56:19.166432  389648 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0904 20:56:19.165219  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.167756  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0904 20:56:19.168483  389648 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 20:56:19.168508  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0904 20:56:19.168567  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.168620  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0904 20:56:19.168633  389648 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0904 20:56:19.168672  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.170255  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0904 20:56:19.170500  389648 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-049370"
	I0904 20:56:19.170541  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.165299  389648 addons.go:69] Setting volcano=true in profile "addons-049370"
	I0904 20:56:19.170598  389648 addons.go:238] Setting addon volcano=true in "addons-049370"
	I0904 20:56:19.170662  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.125367  389648 addons.go:69] Setting ingress-dns=true in profile "addons-049370"
	I0904 20:56:19.170703  389648 addons.go:238] Setting addon ingress-dns=true in "addons-049370"
	I0904 20:56:19.170745  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.171072  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.171559  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.171696  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.173941  389648 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0904 20:56:19.174145  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0904 20:56:19.175294  389648 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 20:56:19.175317  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0904 20:56:19.175370  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.176359  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0904 20:56:19.177473  389648 out.go:179]   - Using image docker.io/registry:3.0.0
	I0904 20:56:19.178644  389648 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0904 20:56:19.179797  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0904 20:56:19.184781  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0904 20:56:19.185493  389648 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0904 20:56:19.185566  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0904 20:56:19.185663  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.193293  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0904 20:56:19.193281  389648 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0904 20:56:19.193365  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0904 20:56:19.193325  389648 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 20:56:19.194473  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 20:56:19.194494  389648 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 20:56:19.194572  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.195252  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0904 20:56:19.195290  389648 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0904 20:56:19.195358  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.195374  389648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:56:19.195397  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 20:56:19.195449  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.195584  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0904 20:56:19.196553  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0904 20:56:19.196568  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0904 20:56:19.196639  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	W0904 20:56:19.205941  389648 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0904 20:56:19.216885  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.234344  389648 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0904 20:56:19.234475  389648 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0904 20:56:19.236096  389648 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0904 20:56:19.236117  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0904 20:56:19.236181  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.236410  389648 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0904 20:56:19.236424  389648 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0904 20:56:19.236486  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.238985  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.249090  389648 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0904 20:56:19.250463  389648 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:56:19.250482  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0904 20:56:19.250581  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.251306  389648 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0904 20:56:19.252687  389648 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:56:19.252707  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0904 20:56:19.252773  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.253251  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.253990  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.275865  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.276415  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.286380  389648 addons.go:238] Setting addon default-storageclass=true in "addons-049370"
	I0904 20:56:19.286427  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.286470  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.286751  389648 out.go:179]   - Using image docker.io/busybox:stable
	I0904 20:56:19.286808  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0904 20:56:19.286911  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.289833  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.290415  389648 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0904 20:56:19.290520  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:19.290861  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.291698  389648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:56:19.291722  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0904 20:56:19.291783  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.294160  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:19.298343  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.298945  389648 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:56:19.298968  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0904 20:56:19.299026  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.302411  389648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 20:56:19.306360  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.309871  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.312171  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.319635  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.321085  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.321297  389648 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 20:56:19.321320  389648 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 20:56:19.321378  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.337083  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	W0904 20:56:19.349311  389648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0904 20:56:19.349347  389648 retry.go:31] will retry after 269.872023ms: ssh: handshake failed: EOF
	W0904 20:56:19.349375  389648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0904 20:56:19.349384  389648 retry.go:31] will retry after 359.531202ms: ssh: handshake failed: EOF
	I0904 20:56:19.548037  389648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:56:19.652723  389648 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0904 20:56:19.652769  389648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0904 20:56:19.663141  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0904 20:56:19.663174  389648 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0904 20:56:19.746376  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 20:56:19.746406  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0904 20:56:19.746783  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0904 20:56:19.746802  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0904 20:56:19.751531  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 20:56:19.756963  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:56:19.767028  389648 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0904 20:56:19.767122  389648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0904 20:56:19.861846  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0904 20:56:19.861944  389648 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0904 20:56:19.946528  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:56:19.947053  389648 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:19.947078  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0904 20:56:19.955391  389648 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0904 20:56:19.955469  389648 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0904 20:56:19.959896  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:56:19.964131  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 20:56:19.964187  389648 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0904 20:56:19.966688  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 20:56:19.967554  389648 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0904 20:56:19.967597  389648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0904 20:56:19.969099  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:56:19.970420  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 20:56:20.047283  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0904 20:56:20.047381  389648 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0904 20:56:20.054133  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0904 20:56:20.054222  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0904 20:56:20.255401  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:56:20.255496  389648 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 20:56:20.266289  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:20.268923  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0904 20:56:20.345501  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:56:20.348902  389648 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:56:20.348951  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0904 20:56:20.349097  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0904 20:56:20.349114  389648 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0904 20:56:20.448730  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:56:20.448833  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0904 20:56:20.564135  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0904 20:56:20.564226  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0904 20:56:20.751518  389648 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.44906046s)
	I0904 20:56:20.751627  389648 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0904 20:56:20.751853  389648 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.203672375s)
	I0904 20:56:20.754155  389648 node_ready.go:35] waiting up to 6m0s for node "addons-049370" to be "Ready" ...
	I0904 20:56:20.761736  389648 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:20.761796  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0904 20:56:20.846606  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:56:20.856051  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:56:20.866698  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0904 20:56:20.866814  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0904 20:56:21.145379  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:56:21.350361  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.598747473s)
	I0904 20:56:21.367474  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:21.448274  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0904 20:56:21.448385  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0904 20:56:21.655407  389648 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-049370" context rescaled to 1 replicas
	I0904 20:56:21.846590  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0904 20:56:21.846680  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0904 20:56:22.161088  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0904 20:56:22.161184  389648 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0904 20:56:22.558322  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0904 20:56:22.558416  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0904 20:56:22.757443  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0904 20:56:22.757535  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	W0904 20:56:22.854862  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:23.062691  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:56:23.062785  389648 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0904 20:56:23.546535  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:56:23.864177  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.107174607s)
	I0904 20:56:24.150368  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.203792982s)
	I0904 20:56:24.150762  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.190798522s)
	I0904 20:56:24.150841  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.184107525s)
	I0904 20:56:24.150883  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.181727834s)
	I0904 20:56:24.150921  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.180454067s)
	I0904 20:56:24.153577  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.887255095s)
	W0904 20:56:24.153617  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:24.153648  389648 retry.go:31] will retry after 274.263741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 20:56:24.255664  389648 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0904 20:56:24.428253  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:25.145429  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.799802302s)
	I0904 20:56:25.145480  389648 addons.go:479] Verifying addon ingress=true in "addons-049370"
	I0904 20:56:25.145982  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.299284675s)
	I0904 20:56:25.146015  389648 addons.go:479] Verifying addon registry=true in "addons-049370"
	I0904 20:56:25.146076  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.289922439s)
	I0904 20:56:25.146132  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.876927435s)
	I0904 20:56:25.146167  389648 addons.go:479] Verifying addon metrics-server=true in "addons-049370"
	I0904 20:56:25.146241  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.000773212s)
	I0904 20:56:25.147285  389648 out.go:179] * Verifying registry addon...
	I0904 20:56:25.147335  389648 out.go:179] * Verifying ingress addon...
	I0904 20:56:25.148139  389648 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-049370 service yakd-dashboard -n yakd-dashboard
	
	I0904 20:56:25.149773  389648 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0904 20:56:25.149773  389648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0904 20:56:25.162307  389648 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:56:25.162382  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:25.162833  389648 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 20:56:25.162892  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:25.256839  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:25.653521  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:25.653811  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:26.153386  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:26.153683  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:26.355221  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.987641855s)
	W0904 20:56:26.355277  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 20:56:26.355305  389648 retry.go:31] will retry after 260.638152ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 20:56:26.355424  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.808790402s)
	I0904 20:56:26.355454  389648 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-049370"
	I0904 20:56:26.356999  389648 out.go:179] * Verifying csi-hostpath-driver addon...
	I0904 20:56:26.359335  389648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0904 20:56:26.364572  389648 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:56:26.364592  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:26.415311  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.987009875s)
	W0904 20:56:26.415355  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:26.415375  389648 retry.go:31] will retry after 295.761583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:26.616984  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:26.653507  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:26.653558  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:26.711551  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:26.849469  389648 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0904 20:56:26.849544  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:26.862656  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:26.874207  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:26.978097  389648 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0904 20:56:26.994974  389648 addons.go:238] Setting addon gcp-auth=true in "addons-049370"
	I0904 20:56:26.995024  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:26.995376  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:27.012374  389648 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0904 20:56:27.012428  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:27.028863  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:27.152149  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:27.152264  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:27.362370  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:27.653212  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:27.653402  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:27.758106  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:27.863000  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:28.153378  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:28.153490  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:28.363340  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:28.653066  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:28.653239  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:28.861982  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:29.092107  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.475068234s)
	I0904 20:56:29.092190  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.380598109s)
	I0904 20:56:29.092219  389648 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.079820201s)
	W0904 20:56:29.092237  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:29.092263  389648 retry.go:31] will retry after 502.484223ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
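	(Editor's note on the failure above: the repeated "apply failed, will retry" cycles all stem from kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the file carries no apiVersion or kind fields. As an illustrative sketch only, and not the contents of the actual addon file, a CustomResourceDefinition manifest that would pass this check opens with a header along these lines:

	    # hypothetical example manifest header; every Kubernetes object needs apiVersion and kind
	    apiVersion: apiextensions.k8s.io/v1
	    kind: CustomResourceDefinition
	    metadata:
	      name: widgets.example.com   # placeholder name, not taken from this log

	The stderr also points at the escape hatch of passing --validate=false to kubectl apply, which merely skips this validation rather than repairing the manifest, which is why every subsequent retry below fails with the same message.)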
	I0904 20:56:29.093894  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:29.095483  389648 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0904 20:56:29.096510  389648 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0904 20:56:29.096529  389648 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0904 20:56:29.112631  389648 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0904 20:56:29.112663  389648 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0904 20:56:29.128018  389648 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:56:29.128036  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0904 20:56:29.143020  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:56:29.153882  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:29.154123  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:29.362692  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:29.454170  389648 addons.go:479] Verifying addon gcp-auth=true in "addons-049370"
	I0904 20:56:29.455515  389648 out.go:179] * Verifying gcp-auth addon...
	I0904 20:56:29.457417  389648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0904 20:56:29.459571  389648 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 20:56:29.459590  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:29.595708  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:29.652683  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:29.652827  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:29.862029  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:29.960159  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:30.114851  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:30.114881  389648 retry.go:31] will retry after 693.179023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:30.152713  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:30.152863  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:30.257051  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:30.362609  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:30.460179  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:30.652858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:30.652980  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:30.808239  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:30.863242  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:30.961106  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:31.154171  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:31.154231  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:31.322382  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:31.322416  389648 retry.go:31] will retry after 1.197657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:31.362659  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:31.459971  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:31.652462  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:31.652562  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:31.862315  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:31.960600  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:32.153504  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:32.153604  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:32.362298  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:32.460616  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:32.520713  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:32.652511  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:32.652595  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:32.760458  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:32.863634  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:32.959731  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:33.040841  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:33.040881  389648 retry.go:31] will retry after 2.457515415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:33.152726  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:33.152743  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:33.362502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:33.460284  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:33.652934  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:33.653038  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:33.862246  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:33.960818  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:34.153166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:34.153280  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:34.362100  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:34.460789  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:34.653810  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:34.653810  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:34.861972  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:34.960530  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:35.153325  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:35.153406  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:35.257683  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:35.362424  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:35.460858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:35.499007  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:35.653242  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:35.653299  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:35.861724  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:35.959645  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:36.016874  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:36.016905  389648 retry.go:31] will retry after 3.533514487s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:36.152675  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:36.152869  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:36.362591  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:36.459815  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:36.652244  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:36.652298  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:36.862251  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:36.960712  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:37.153481  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:37.153520  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:37.362437  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:37.460789  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:37.652357  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:37.652379  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:37.756527  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:37.862037  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:37.960447  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:38.153502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:38.153539  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:38.362816  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:38.460210  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:38.652903  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:38.653135  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:38.862007  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:38.960650  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:39.153578  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:39.153774  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:39.361972  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:39.460461  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:39.551574  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:39.653495  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:39.653650  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:39.757372  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:39.862832  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:39.960361  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:40.069853  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:40.069886  389648 retry.go:31] will retry after 3.560952844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:40.153097  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:40.153206  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:40.363022  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:40.460438  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:40.653028  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:40.653073  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:40.861984  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:40.960713  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:41.153196  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:41.153351  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:41.361826  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:41.460267  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:41.652784  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:41.652802  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:41.862344  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:41.960834  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:42.152737  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:42.152979  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:42.257147  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:42.362587  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:42.459962  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:42.652593  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:42.652591  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:42.862875  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:42.960672  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:43.153594  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:43.153640  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:43.362502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:43.459930  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:43.631059  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:43.652889  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:43.653087  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:43.863337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:43.960266  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:44.144205  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:44.144237  389648 retry.go:31] will retry after 6.676490417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:44.152882  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:44.152942  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:44.257493  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:44.362019  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:44.460489  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:44.652917  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:44.653070  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:44.863130  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:44.960584  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:45.153391  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:45.153527  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:45.362608  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:45.460071  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:45.652849  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:45.652915  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:45.862777  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:45.960533  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:46.153667  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:46.153804  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:46.362632  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:46.459907  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:46.652296  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:46.652477  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:46.756788  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:46.862351  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:46.960886  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:47.152391  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:47.152568  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:47.362190  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:47.460736  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:47.653232  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:47.653276  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:47.862018  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:47.960474  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:48.153153  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:48.153187  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:48.361882  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:48.460168  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:48.652729  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:48.652864  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:48.757107  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:48.862689  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:48.960180  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:49.152873  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:49.153024  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:49.362233  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:49.460721  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:49.653148  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:49.653303  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:49.861892  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:49.960294  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:50.153077  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:50.153232  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:50.362407  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:50.460915  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:50.652501  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:50.652591  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:50.821192  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:50.862867  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:50.960502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:51.153049  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:51.153160  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:51.256873  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	W0904 20:56:51.328889  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:51.328930  389648 retry.go:31] will retry after 8.058478981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:51.362542  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:51.459958  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:51.652490  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:51.652667  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:51.862401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:51.960987  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:52.152519  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:52.152675  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:52.362366  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:52.460825  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:52.652376  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:52.652430  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:52.862135  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:52.960933  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:53.152709  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:53.152720  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:53.257375  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:53.361785  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:53.460337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:53.652733  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:53.653014  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:53.862742  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:53.960136  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:54.152726  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:54.152730  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:54.362518  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:54.461080  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:54.652473  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:54.652664  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:54.862347  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:54.961384  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:55.153124  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:55.153270  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:55.257640  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:55.362463  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:55.460990  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:55.652354  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:55.652574  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:55.862388  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:55.960122  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:56.152694  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:56.152920  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:56.361858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:56.460337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:56.653103  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:56.653185  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:56.862426  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:56.960988  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:57.152323  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:57.152431  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:57.362264  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:57.460771  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:57.653160  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:57.653308  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:57.756540  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:57.861955  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:57.960493  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:58.153029  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:58.153223  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:58.362583  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:58.460924  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:58.652481  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:58.652538  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:58.862381  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:58.960880  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:59.152567  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:59.152726  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:59.362851  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:59.387964  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:59.460401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:59.652881  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:59.653048  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:59.757426  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:59.862341  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0904 20:56:59.907626  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:59.907661  389648 retry.go:31] will retry after 19.126227015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:59.960065  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:00.152732  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:00.152876  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:00.363049  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:00.460514  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:00.653154  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:00.653270  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:00.862296  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:00.961337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:01.152894  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:01.153019  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:01.362117  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:01.460734  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:01.653271  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:01.653460  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:01.862509  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:01.960047  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:02.152837  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:02.152896  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:57:02.257044  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:57:02.362872  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:02.460517  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:02.653172  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:02.653366  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:02.862373  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:02.961084  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:03.152784  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:03.152910  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:03.362694  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:03.459964  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:03.652371  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:03.652557  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:03.757648  389648 node_ready.go:49] node "addons-049370" is "Ready"
	I0904 20:57:03.757687  389648 node_ready.go:38] duration metric: took 43.003447045s for node "addons-049370" to be "Ready" ...
	I0904 20:57:03.757707  389648 api_server.go:52] waiting for apiserver process to appear ...
	I0904 20:57:03.757770  389648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 20:57:03.775055  389648 api_server.go:72] duration metric: took 44.653167184s to wait for apiserver process to appear ...
	I0904 20:57:03.775146  389648 api_server.go:88] waiting for apiserver healthz status ...
	I0904 20:57:03.775175  389648 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 20:57:03.847773  389648 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0904 20:57:03.848894  389648 api_server.go:141] control plane version: v1.34.0
	I0904 20:57:03.848928  389648 api_server.go:131] duration metric: took 73.768685ms to wait for apiserver health ...
	I0904 20:57:03.848941  389648 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 20:57:03.853285  389648 system_pods.go:59] 20 kube-system pods found
	I0904 20:57:03.853319  389648 system_pods.go:61] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending
	I0904 20:57:03.853326  389648 system_pods.go:61] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending
	I0904 20:57:03.853331  389648 system_pods.go:61] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending
	I0904 20:57:03.853336  389648 system_pods.go:61] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending
	I0904 20:57:03.853341  389648 system_pods.go:61] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending
	I0904 20:57:03.853346  389648 system_pods.go:61] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:03.853352  389648 system_pods.go:61] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:03.853358  389648 system_pods.go:61] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:03.853366  389648 system_pods.go:61] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:03.853372  389648 system_pods.go:61] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending
	I0904 20:57:03.853380  389648 system_pods.go:61] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:03.853389  389648 system_pods.go:61] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:03.853403  389648 system_pods.go:61] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:03.853412  389648 system_pods.go:61] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending
	I0904 20:57:03.853423  389648 system_pods.go:61] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending
	I0904 20:57:03.853431  389648 system_pods.go:61] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:03.853439  389648 system_pods.go:61] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending
	I0904 20:57:03.853445  389648 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending
	I0904 20:57:03.853455  389648 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending
	I0904 20:57:03.853460  389648 system_pods.go:61] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending
	I0904 20:57:03.853471  389648 system_pods.go:74] duration metric: took 4.521878ms to wait for pod list to return data ...
	I0904 20:57:03.853485  389648 default_sa.go:34] waiting for default service account to be created ...
	I0904 20:57:03.855589  389648 default_sa.go:45] found service account: "default"
	I0904 20:57:03.855645  389648 default_sa.go:55] duration metric: took 2.148457ms for default service account to be created ...
	I0904 20:57:03.855669  389648 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 20:57:03.864140  389648 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:57:03.864166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:03.865511  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:03.865543  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending
	I0904 20:57:03.865552  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending
	I0904 20:57:03.865558  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending
	I0904 20:57:03.865563  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending
	I0904 20:57:03.865568  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending
	I0904 20:57:03.865574  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:03.865580  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:03.865586  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:03.865591  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:03.865595  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending
	I0904 20:57:03.865599  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:03.865602  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:03.865611  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:03.865621  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending
	I0904 20:57:03.865627  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending
	I0904 20:57:03.865631  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:03.865635  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending
	I0904 20:57:03.865639  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending
	I0904 20:57:03.865645  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending
	I0904 20:57:03.865650  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:03.865666  389648 retry.go:31] will retry after 266.681541ms: missing components: kube-dns
	I0904 20:57:03.963849  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:04.148992  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:04.149036  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:04.149049  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:57:04.149060  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:04.149065  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending
	I0904 20:57:04.149077  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:04.149083  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:04.149090  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:04.149095  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:04.149101  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:04.149158  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:04.149164  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:04.149171  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:04.149179  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:04.149188  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending
	I0904 20:57:04.149196  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:04.149207  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:04.149216  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending
	I0904 20:57:04.149226  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.149236  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.149249  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:04.149269  389648 retry.go:31] will retry after 384.617911ms: missing components: kube-dns
	I0904 20:57:04.154716  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:04.154839  389648 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:57:04.154853  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:04.366569  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:04.466268  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:04.567997  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:04.568030  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:04.568038  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:57:04.568045  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:04.568050  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 20:57:04.568057  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:04.568063  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:04.568067  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:04.568071  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:04.568074  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:04.568081  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:04.568086  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:04.568091  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:04.568096  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:04.568110  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 20:57:04.568115  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:04.568122  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:04.568127  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 20:57:04.568135  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.568140  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.568147  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:04.568163  389648 retry.go:31] will retry after 481.666443ms: missing components: kube-dns
	I0904 20:57:04.667086  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:04.667538  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:04.862644  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:04.959928  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:05.053770  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:05.053813  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:05.053821  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:57:05.053829  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:05.053834  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 20:57:05.053840  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:05.053846  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:05.053850  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:05.053854  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:05.053858  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:05.053863  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:05.053871  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:05.053875  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:05.053880  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:05.053887  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 20:57:05.053893  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:05.053900  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:05.053905  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 20:57:05.053912  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.053918  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.053924  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:05.053939  389648 retry.go:31] will retry after 484.806352ms: missing components: kube-dns
	I0904 20:57:05.153022  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:05.153142  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:05.363067  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:05.460377  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:05.543458  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:05.543495  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:05.543501  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Running
	I0904 20:57:05.543508  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:05.543514  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 20:57:05.543520  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:05.543525  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:05.543530  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:05.543542  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:05.543552  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:05.543557  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:05.543563  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:05.543567  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:05.543571  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:05.543579  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 20:57:05.543585  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:05.543593  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:05.543598  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 20:57:05.543605  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.543612  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.543618  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Running
	I0904 20:57:05.543626  389648 system_pods.go:126] duration metric: took 1.687941335s to wait for k8s-apps to be running ...
	I0904 20:57:05.543650  389648 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 20:57:05.543694  389648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 20:57:05.555385  389648 system_svc.go:56] duration metric: took 11.725653ms WaitForService to wait for kubelet
	I0904 20:57:05.555412  389648 kubeadm.go:578] duration metric: took 46.433531844s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:57:05.555439  389648 node_conditions.go:102] verifying NodePressure condition ...
	I0904 20:57:05.558136  389648 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 20:57:05.558169  389648 node_conditions.go:123] node cpu capacity is 8
	I0904 20:57:05.558187  389648 node_conditions.go:105] duration metric: took 2.741859ms to run NodePressure ...
	I0904 20:57:05.558203  389648 start.go:241] waiting for startup goroutines ...
	I0904 20:57:05.653335  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:05.653493  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:05.862594  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:05.960405  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:06.155853  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:06.155860  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:06.363166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:06.460689  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:06.653352  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:06.653395  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:06.862486  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:06.960974  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:07.152583  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:07.152693  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:07.362526  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:07.461234  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:07.653353  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:07.653430  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:07.862588  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:07.961373  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:08.153869  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:08.153919  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:08.363098  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:08.460845  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:08.652618  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:08.652818  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:08.863708  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:08.961239  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:09.153619  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:09.153661  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:09.363027  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:09.461172  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:09.653178  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:09.653259  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:09.862455  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:09.961183  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:10.153505  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:10.153868  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:10.362513  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:10.460913  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:10.653892  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:10.654021  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:10.863179  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:10.961003  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:11.152924  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:11.152937  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:11.363254  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:11.460435  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:11.653707  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:11.653749  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:11.862653  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:11.960670  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:12.153474  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:12.153582  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:12.362607  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:12.460401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:12.653547  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:12.653621  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:12.863488  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:12.961428  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:13.153780  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:13.153926  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:13.363601  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:13.463509  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:13.653590  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:13.653721  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:13.863091  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:13.960747  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:14.156722  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:14.156892  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:14.363724  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:14.460915  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:14.652850  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:14.652930  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:14.863379  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:14.960898  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:15.153105  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:15.153190  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:15.364645  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:15.466746  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:15.653529  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:15.653552  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:15.863473  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:15.961399  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:16.153418  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:16.153633  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:16.365659  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:16.460427  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:16.655316  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:16.656314  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:16.863846  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:16.960170  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:17.153040  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:17.153440  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:17.362488  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:17.461324  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:17.653058  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:17.653099  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:17.862919  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:17.960632  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:18.153699  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:18.153804  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:18.362710  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:18.460244  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:18.653100  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:18.653412  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:18.862825  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:18.963826  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:19.034934  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:19.152876  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:19.153003  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:19.363216  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:19.461101  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:19.654705  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:19.654966  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:19.862758  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:19.960238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:57:19.965214  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:19.965249  389648 retry.go:31] will retry after 20.693378838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:20.153317  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:20.153424  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:20.362498  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:20.461668  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:20.653715  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:20.653849  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:20.862660  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:20.960422  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:21.153279  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:21.153367  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:21.362521  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:21.461453  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:21.653611  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:21.653616  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:21.862958  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:21.960988  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:22.152881  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:22.152896  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:22.362933  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:22.460865  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:22.652773  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:22.652825  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:22.862669  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:22.960462  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:23.153822  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:23.154026  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:23.362981  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:23.460282  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:23.653482  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:23.653565  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:23.862339  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:23.960741  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:24.153397  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:24.153562  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:24.362213  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:24.460604  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:24.653463  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:24.653585  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:24.862661  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:24.960921  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:25.152671  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:25.152676  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:25.362282  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:25.460981  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:25.652991  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:25.653126  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:25.863187  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:25.960971  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:26.155115  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:26.155549  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:26.364641  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:26.460565  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:26.653351  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:26.653460  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:26.862335  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:26.961215  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:27.153245  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:27.153382  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:27.362420  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:27.460886  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:27.652946  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:27.653004  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:27.862794  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:27.960433  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:28.153554  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:28.153563  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:28.362061  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:28.460951  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:28.653077  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:28.653166  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:28.862812  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:28.960910  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:29.152712  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:29.152713  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:29.362969  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:29.460457  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:29.653716  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:29.653816  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:29.862674  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:29.960527  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:30.153309  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:30.153467  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:30.364159  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:30.465320  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:30.653741  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:30.653775  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:30.862640  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:30.963437  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:31.153238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:31.153259  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:31.362036  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:31.460565  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:31.653248  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:31.653298  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:31.863300  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:31.960651  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:32.153238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:32.153326  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:32.362194  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:32.460483  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:32.653633  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:32.653670  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:32.862856  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:32.960920  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:33.163353  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:33.163571  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:33.363807  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:33.463275  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:33.661398  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:33.661866  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:34.067754  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:34.158198  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:34.252681  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:34.267829  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:34.462230  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:34.462566  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:34.655260  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:34.655318  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:34.862629  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:34.960553  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:35.153838  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:35.153871  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:35.363148  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:35.461050  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:35.653525  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:35.653658  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:35.864175  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:35.961508  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:36.154202  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:36.154257  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:36.363162  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:36.460840  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:36.653022  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:36.653219  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:36.863704  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:36.960613  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:37.153938  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:37.153958  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:37.363084  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:37.461050  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:37.652708  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:37.652726  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:37.862959  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:37.960607  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:38.153906  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:38.154265  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:38.363779  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:38.460618  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:38.653662  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:38.653739  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:38.862850  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:38.960535  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:39.153828  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:39.153870  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:39.363192  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:39.461549  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:39.653371  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:39.653594  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:39.862436  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:39.961060  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:40.153255  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:40.153265  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:40.362463  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:40.461112  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:40.653195  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:40.653238  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:40.659168  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:40.863485  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:40.961390  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:41.153507  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:41.153683  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:41.363294  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:41.460511  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:57:41.586847  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:41.586876  389648 retry.go:31] will retry after 18.584233469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:41.653116  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:41.653297  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:41.864041  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:41.960341  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:42.153090  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:42.153093  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:42.363050  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:42.460434  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:42.653587  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:42.653634  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:42.862872  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:42.960883  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:43.153266  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:43.153570  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:43.362999  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:43.460713  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:43.653498  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:43.653565  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:43.862351  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:43.960779  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:44.152645  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:44.152744  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:44.362647  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:44.460216  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:44.653789  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:44.654025  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:44.863259  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:44.961105  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:45.153229  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:45.153267  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:45.363497  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:45.461501  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:45.653400  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:45.653589  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:45.862262  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:45.960864  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:46.152860  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:46.152890  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:46.363058  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:46.460848  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:46.653051  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:46.653077  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:46.863163  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:46.960859  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:47.153234  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:47.153238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:47.363116  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:47.460543  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:47.653774  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:47.653836  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:47.863023  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:47.961011  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:48.153044  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:48.153183  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:48.363514  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:48.461320  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:48.653777  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:48.653858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:48.862550  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:48.961142  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:49.153028  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:49.153220  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:49.362652  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:49.459891  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:49.653164  389648 kapi.go:107] duration metric: took 1m24.503386944s to wait for kubernetes.io/minikube-addons=registry ...
	I0904 20:57:49.653212  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:49.862954  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:49.960303  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:50.153422  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:50.362439  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:50.460798  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:50.653419  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:50.862686  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:50.960970  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:51.154179  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:51.363166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:51.460875  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:51.652647  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:51.863070  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:51.960526  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:52.153813  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:52.362711  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:52.460087  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:52.653154  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:52.863206  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:52.960823  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:53.153125  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:53.363443  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:53.461004  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:53.656643  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:53.866801  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:53.961469  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:54.153974  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:54.364415  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:54.461643  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:54.655730  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:54.867016  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:54.961177  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:55.155271  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:55.363462  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:55.461909  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:55.654080  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:55.862506  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:55.962401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:56.153639  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:56.363134  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:56.460790  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:56.653986  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:56.862951  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:56.959890  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:57.152935  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:57.363141  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:57.460860  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:57.653029  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:57.863171  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:57.961135  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:58.153239  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:58.363391  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:58.460583  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:58.654112  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:58.863905  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:58.960604  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:59.153765  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:59.363398  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:59.460827  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:59.653240  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:59.863414  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:59.960740  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:00.154243  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:00.172145  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:58:00.363535  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:00.460166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:00.653062  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:00.863155  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:00.960597  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:01.153145  389648 kapi.go:107] duration metric: took 1m36.0033494s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0904 20:58:01.362175  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.189987346s)
	W0904 20:58:01.362237  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 20:58:01.362358  389648 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
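The failed apply above (first attempted at 20:57:41, retried at 20:58:01) is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml: its top-level apiVersion and kind are reported as unset, while the other gadget resources in the same apply go through unchanged, and the stderr itself names the --validate=false workaround. As a rough, hypothetical Go sketch of what that header check amounts to (not minikube's or kubectl's actual code; the typeMeta struct and the sample manifest are invented for illustration):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// typeMeta mirrors the two header fields the validation error complains about.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

// checkTypeMeta reports the same condition as the stderr above: a manifest
// whose top-level apiVersion or kind is missing fails client-side validation.
func checkTypeMeta(manifest []byte) error {
	var tm typeMeta
	if err := yaml.Unmarshal(manifest, &tm); err != nil {
		return err
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		return fmt.Errorf("apiVersion not set, kind not set")
	}
	return nil
}

func main() {
	// Stand-in for a manifest that is missing its header, as the error
	// suggests happened with ig-crd.yaml on this run (content assumed).
	bad := []byte("metadata:\n  name: example\n")
	fmt.Println(checkTypeMeta(bad)) // prints: apiVersion not set, kind not set
}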
	I0904 20:58:01.377323  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:01.461301  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:01.862664  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:01.960172  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:02.362264  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:02.460782  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:02.863228  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:02.960690  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:03.362947  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:03.461061  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:03.863740  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:03.960182  389648 kapi.go:107] duration metric: took 1m34.502765752s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0904 20:58:03.962033  389648 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-049370 cluster.
	I0904 20:58:03.963517  389648 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0904 20:58:03.964745  389648 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
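The gcp-auth messages above describe an opt-out: a pod carrying the `gcp-auth-skip-secret` label key is not mutated to mount the GCP credentials. A hypothetical Go sketch of such a pod object follows; only the label key comes from the log, while the label value "true", the pod name, and the container are assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipGCPAuthPod builds a pod labelled so the gcp-auth addon leaves it alone.
func skipGCPAuthPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",                                    // hypothetical name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // value assumed; key is from the log
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "docker.io/nginx:alpine"},
			},
		},
	}
}

func main() {
	fmt.Println(skipGCPAuthPod().Labels)
}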
	I0904 20:58:04.362552  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:04.863544  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:05.363523  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:05.862668  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:06.363450  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:06.862835  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:07.363579  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:07.862482  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:08.362742  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:08.863840  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:09.365433  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:09.862609  389648 kapi.go:107] duration metric: took 1m43.503273609s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0904 20:58:09.864811  389648 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, ingress-dns, registry-creds, nvidia-device-plugin, default-storageclass, cloud-spanner, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0904 20:58:09.865999  389648 addons.go:514] duration metric: took 1m50.744105832s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner ingress-dns registry-creds nvidia-device-plugin default-storageclass cloud-spanner metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0904 20:58:09.866049  389648 start.go:246] waiting for cluster config update ...
	I0904 20:58:09.866079  389648 start.go:255] writing updated cluster config ...
	I0904 20:58:09.866376  389648 ssh_runner.go:195] Run: rm -f paused
	I0904 20:58:09.869857  389648 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 20:58:09.872605  389648 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m8z9t" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.876507  389648 pod_ready.go:94] pod "coredns-66bc5c9577-m8z9t" is "Ready"
	I0904 20:58:09.876529  389648 pod_ready.go:86] duration metric: took 3.904383ms for pod "coredns-66bc5c9577-m8z9t" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.878366  389648 pod_ready.go:83] waiting for pod "etcd-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.881658  389648 pod_ready.go:94] pod "etcd-addons-049370" is "Ready"
	I0904 20:58:09.881678  389648 pod_ready.go:86] duration metric: took 3.291911ms for pod "etcd-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.883326  389648 pod_ready.go:83] waiting for pod "kube-apiserver-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.886438  389648 pod_ready.go:94] pod "kube-apiserver-addons-049370" is "Ready"
	I0904 20:58:09.886456  389648 pod_ready.go:86] duration metric: took 3.11401ms for pod "kube-apiserver-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.888020  389648 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:10.273761  389648 pod_ready.go:94] pod "kube-controller-manager-addons-049370" is "Ready"
	I0904 20:58:10.273790  389648 pod_ready.go:86] duration metric: took 385.749346ms for pod "kube-controller-manager-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:10.473572  389648 pod_ready.go:83] waiting for pod "kube-proxy-k5lnm" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:10.873887  389648 pod_ready.go:94] pod "kube-proxy-k5lnm" is "Ready"
	I0904 20:58:10.873914  389648 pod_ready.go:86] duration metric: took 400.319117ms for pod "kube-proxy-k5lnm" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:11.074268  389648 pod_ready.go:83] waiting for pod "kube-scheduler-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:11.473936  389648 pod_ready.go:94] pod "kube-scheduler-addons-049370" is "Ready"
	I0904 20:58:11.473971  389648 pod_ready.go:86] duration metric: took 399.67197ms for pod "kube-scheduler-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:11.473987  389648 pod_ready.go:40] duration metric: took 1.604097075s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 20:58:11.514779  389648 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 20:58:11.516435  389648 out.go:179] * Done! kubectl is now configured to use "addons-049370" cluster and "default" namespace by default
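For context on the long runs of "waiting for pod ..., current state: Pending" lines above: kapi.go repeatedly lists the pods behind each addon's label selector until they report Running, then emits the "duration metric: took ..." summary. A minimal sketch of that style of poll, assuming client-go and a 200ms interval (an illustration only, not minikube's kapi.go; package and function names are invented):

package kapisketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPods polls every 200ms until every pod matching selector in ns
// reports Running, logging the current phase on each tick, or gives up
// once timeout elapses.
func WaitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 200*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API hiccups as "not ready yet" and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
}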
	
	
	==> CRI-O <==
	Sep 04 21:06:00 addons-049370 crio[1043]: time="2025-09-04 21:06:00.977969962Z" level=info msg="Image docker.io/nginx:alpine not found" id=1401989d-2c73-4c63-9f7d-6db64bb31cb6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.408732106Z" level=info msg="Stopping pod sandbox: f325578cefe27afec9f99d7abc395bec79133dc4ab72005ede61cedb5a3c901e" id=3b7159d9-ff4f-40f3-848b-72e207246507 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.408808039Z" level=info msg="Stopped pod sandbox (already stopped): f325578cefe27afec9f99d7abc395bec79133dc4ab72005ede61cedb5a3c901e" id=3b7159d9-ff4f-40f3-848b-72e207246507 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.409119877Z" level=info msg="Removing pod sandbox: f325578cefe27afec9f99d7abc395bec79133dc4ab72005ede61cedb5a3c901e" id=e9d468d0-0898-406d-91cf-cbca998fae39 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.415347802Z" level=info msg="Removed pod sandbox: f325578cefe27afec9f99d7abc395bec79133dc4ab72005ede61cedb5a3c901e" id=e9d468d0-0898-406d-91cf-cbca998fae39 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.415658898Z" level=info msg="Stopping pod sandbox: 06eca301ea94b5887eaab98ade1cdc232fc773a67b8202a88d698c50c6368469" id=76268925-5c5f-46a4-b769-f1429002e47a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.415693830Z" level=info msg="Stopped pod sandbox (already stopped): 06eca301ea94b5887eaab98ade1cdc232fc773a67b8202a88d698c50c6368469" id=76268925-5c5f-46a4-b769-f1429002e47a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.415993198Z" level=info msg="Removing pod sandbox: 06eca301ea94b5887eaab98ade1cdc232fc773a67b8202a88d698c50c6368469" id=c8002c5a-f91e-4db0-9d39-37e9663b05d5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.421408262Z" level=info msg="Removed pod sandbox: 06eca301ea94b5887eaab98ade1cdc232fc773a67b8202a88d698c50c6368469" id=c8002c5a-f91e-4db0-9d39-37e9663b05d5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.421741040Z" level=info msg="Stopping pod sandbox: c6e94adfea0879c920e5b1043a4741402a2d83dd530ee8cc09497bffff194810" id=a90c9033-9e0b-4f46-b0be-1ba8e3715dcb name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.421767907Z" level=info msg="Stopped pod sandbox (already stopped): c6e94adfea0879c920e5b1043a4741402a2d83dd530ee8cc09497bffff194810" id=a90c9033-9e0b-4f46-b0be-1ba8e3715dcb name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.422036977Z" level=info msg="Removing pod sandbox: c6e94adfea0879c920e5b1043a4741402a2d83dd530ee8cc09497bffff194810" id=278bf758-5741-4721-8f8c-5b282bb8b23b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.428335213Z" level=info msg="Removed pod sandbox: c6e94adfea0879c920e5b1043a4741402a2d83dd530ee8cc09497bffff194810" id=278bf758-5741-4721-8f8c-5b282bb8b23b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.429824687Z" level=info msg="Stopping pod sandbox: a16557be7ddd829d46b73bb6b28a19e23c2fa203787c36e0f1315533ed69ff17" id=fa915718-5089-4394-a12e-3a2d560a4cdd name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.429862834Z" level=info msg="Stopped pod sandbox (already stopped): a16557be7ddd829d46b73bb6b28a19e23c2fa203787c36e0f1315533ed69ff17" id=fa915718-5089-4394-a12e-3a2d560a4cdd name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.430181337Z" level=info msg="Removing pod sandbox: a16557be7ddd829d46b73bb6b28a19e23c2fa203787c36e0f1315533ed69ff17" id=0e42a394-c140-4f77-a3e9-15f60d5ac2ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.436465741Z" level=info msg="Removed pod sandbox: a16557be7ddd829d46b73bb6b28a19e23c2fa203787c36e0f1315533ed69ff17" id=0e42a394-c140-4f77-a3e9-15f60d5ac2ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.436811205Z" level=info msg="Stopping pod sandbox: cd89e12ceb21a0f7d9da3fd0b919dff01a80975a1043344a9019c2bf548576a0" id=fbf7d22e-8c67-4189-9a8e-8a9438911053 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.436846520Z" level=info msg="Stopped pod sandbox (already stopped): cd89e12ceb21a0f7d9da3fd0b919dff01a80975a1043344a9019c2bf548576a0" id=fbf7d22e-8c67-4189-9a8e-8a9438911053 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.437129509Z" level=info msg="Removing pod sandbox: cd89e12ceb21a0f7d9da3fd0b919dff01a80975a1043344a9019c2bf548576a0" id=c4870d7f-35d9-41a3-8220-c09d8d388ab4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:06:14 addons-049370 crio[1043]: time="2025-09-04 21:06:14.443373302Z" level=info msg="Removed pod sandbox: cd89e12ceb21a0f7d9da3fd0b919dff01a80975a1043344a9019c2bf548576a0" id=c4870d7f-35d9-41a3-8220-c09d8d388ab4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:06:15 addons-049370 crio[1043]: time="2025-09-04 21:06:15.978688797Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d0626f2d-6a34-4e9a-b07f-fc09dedd5f77 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:06:15 addons-049370 crio[1043]: time="2025-09-04 21:06:15.978941267Z" level=info msg="Image docker.io/nginx:alpine not found" id=d0626f2d-6a34-4e9a-b07f-fc09dedd5f77 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:06:28 addons-049370 crio[1043]: time="2025-09-04 21:06:28.977538928Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=2ed87b73-3722-4ee6-ac70-fb43b387b28c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:06:28 addons-049370 crio[1043]: time="2025-09-04 21:06:28.977786287Z" level=info msg="Image docker.io/nginx:alpine not found" id=2ed87b73-3722-4ee6-ac70-fb43b387b28c name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0812830cff5e8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   9db653f3755b4       busybox
	71f3c44efa7ed       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             8 minutes ago       Running             controller                0                   616b907580ffe       ingress-nginx-controller-9cc49f96f-9hj2l
	7edf2c6fe20a3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            8 minutes ago       Running             gadget                    0                   c4ec61756e1cd       gadget-whkft
	3ba8ba2525962       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   9 minutes ago       Exited              patch                     0                   6e76b5fa98c54       ingress-nginx-admission-patch-gtdvl
	4d5989f69feeb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   9 minutes ago       Exited              create                    0                   8ab625b3a8d0f       ingress-nginx-admission-create-bcplk
	7d2cafb9fbef5       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               9 minutes ago       Running             minikube-ingress-dns      0                   ab8997b22bdfa       kube-ingress-dns-minikube
	ae86f0dc5f527       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             9 minutes ago       Running             local-path-provisioner    0                   88ad798d96077       local-path-provisioner-648f6765c9-dlgrh
	5a078a0cc821d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             9 minutes ago       Running             storage-provisioner       0                   789a7bd2ea563       storage-provisioner
	f34769614a539       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             9 minutes ago       Running             coredns                   0                   4201e6440890f       coredns-66bc5c9577-m8z9t
	c934f0f4b966c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                             10 minutes ago      Running             kindnet-cni               0                   15477ade7fdb4       kindnet-7bfb9
	f6a9e9c72d6ba       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             10 minutes ago      Running             kube-proxy                0                   8022b4762a732       kube-proxy-k5lnm
	3f2b5739caaa5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             10 minutes ago      Running             etcd                      0                   a0a640c2dfdf7       etcd-addons-049370
	c29c83b9956a1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             10 minutes ago      Running             kube-scheduler            0                   dcb7c5c1869a2       kube-scheduler-addons-049370
	c5667de904598       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             10 minutes ago      Running             kube-controller-manager   0                   8e65b647d075e       kube-controller-manager-addons-049370
	e754d67808d98       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             10 minutes ago      Running             kube-apiserver            0                   ded69ea3b436b       kube-apiserver-addons-049370
	
	
	==> coredns [f34769614a539f8a9deabe583e02287082f6ea11bf18d071546e1a719cab9a53] <==
	[INFO] 10.244.0.19:55773 - 30973 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004937086s
	[INFO] 10.244.0.19:54298 - 21135 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004015449s
	[INFO] 10.244.0.19:54298 - 21393 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005610527s
	[INFO] 10.244.0.19:47617 - 25189 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.0041739s
	[INFO] 10.244.0.19:47617 - 25434 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004264819s
	[INFO] 10.244.0.19:52948 - 36411 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103838s
	[INFO] 10.244.0.19:52948 - 36178 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000155332s
	[INFO] 10.244.0.22:41475 - 45724 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188468s
	[INFO] 10.244.0.22:38337 - 8650 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000284766s
	[INFO] 10.244.0.22:56826 - 5154 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151577s
	[INFO] 10.244.0.22:34009 - 20877 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000175171s
	[INFO] 10.244.0.22:47086 - 56751 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091852s
	[INFO] 10.244.0.22:45919 - 6012 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095085s
	[INFO] 10.244.0.22:37872 - 33595 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003127291s
	[INFO] 10.244.0.22:58544 - 3234 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003586803s
	[INFO] 10.244.0.22:34813 - 19895 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004201792s
	[INFO] 10.244.0.22:60988 - 58217 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005450137s
	[INFO] 10.244.0.22:55463 - 22980 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005077908s
	[INFO] 10.244.0.22:35577 - 40764 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005725827s
	[INFO] 10.244.0.22:55201 - 19501 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00586828s
	[INFO] 10.244.0.22:47590 - 18187 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009774018s
	[INFO] 10.244.0.22:43687 - 40215 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000894336s
	[INFO] 10.244.0.22:40249 - 16957 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001679777s
	[INFO] 10.244.0.26:43025 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196657s
	[INFO] 10.244.0.26:48775 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015597s
	
	
	==> describe nodes <==
	Name:               addons-049370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-049370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a
	                    minikube.k8s.io/name=addons-049370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T20_56_14_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-049370
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 20:56:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-049370
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 21:06:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 21:05:05 +0000   Thu, 04 Sep 2025 20:56:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 21:05:05 +0000   Thu, 04 Sep 2025 20:56:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 21:05:05 +0000   Thu, 04 Sep 2025 20:56:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 21:05:05 +0000   Thu, 04 Sep 2025 20:57:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-049370
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a303a7b5bdc4444fa740fba6d81d7a69
	  System UUID:                e0421e3f-022c-4346-89b0-92bd27eff9ea
	  Boot ID:                    d34ed5fc-a148-45de-9a0e-f744d5f792e8
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  gadget                      gadget-whkft                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-9hj2l    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-m8z9t                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-addons-049370                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-7bfb9                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-049370                250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-049370       200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-k5lnm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-049370                100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-648f6765c9-dlgrh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 10m    kube-proxy       
	  Normal   Starting                 10m    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m    kubelet          Node addons-049370 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m    kubelet          Node addons-049370 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m    kubelet          Node addons-049370 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m    node-controller  Node addons-049370 event: Registered Node addons-049370 in Controller
	  Normal   NodeReady                9m28s  kubelet          Node addons-049370 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000069] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000004] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +1.008573] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000007] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000003] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000001] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +2.015727] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000007] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000000] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000003] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +4.127589] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000006] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000000] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000002] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +8.191103] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000006] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000017] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000002] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	
	
	==> etcd [3f2b5739caaa53e307caf9baa0ce3898f9c7585d8d2ae3924c36566f18f3e2c1] <==
	{"level":"warn","ts":"2025-09-04T20:56:26.851373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:26.859519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.396241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.402544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.546682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.553504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55094","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T20:57:34.056678Z","caller":"traceutil/trace.go:172","msg":"trace[1745603704] transaction","detail":"{read_only:false; response_revision:1077; number_of_response:1; }","duration":"195.917023ms","start":"2025-09-04T20:57:33.860734Z","end":"2025-09-04T20:57:34.056651Z","steps":["trace[1745603704] 'process raft request'  (duration: 195.733375ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:57:34.056844Z","caller":"traceutil/trace.go:172","msg":"trace[278916272] transaction","detail":"{read_only:false; response_revision:1079; number_of_response:1; }","duration":"106.019959ms","start":"2025-09-04T20:57:33.950812Z","end":"2025-09-04T20:57:34.056832Z","steps":["trace[278916272] 'process raft request'  (duration: 105.809945ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:57:34.057108Z","caller":"traceutil/trace.go:172","msg":"trace[325232300] transaction","detail":"{read_only:false; response_revision:1078; number_of_response:1; }","duration":"111.429632ms","start":"2025-09-04T20:57:33.945667Z","end":"2025-09-04T20:57:34.057097Z","steps":["trace[325232300] 'process raft request'  (duration: 110.913633ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:13.409295Z","caller":"traceutil/trace.go:172","msg":"trace[1613289946] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"116.026957ms","start":"2025-09-04T20:58:13.293249Z","end":"2025-09-04T20:58:13.409276Z","steps":["trace[1613289946] 'process raft request'  (duration: 115.923335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:58:30.444195Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.156243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T20:58:30.444268Z","caller":"traceutil/trace.go:172","msg":"trace[447435360] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1361; }","duration":"127.240307ms","start":"2025-09-04T20:58:30.317014Z","end":"2025-09-04T20:58:30.444254Z","steps":["trace[447435360] 'agreement among raft nodes before linearized reading'  (duration: 44.055437ms)","trace[447435360] 'range keys from in-memory index tree'  (duration: 83.073918ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.444209Z","caller":"traceutil/trace.go:172","msg":"trace[2038104867] transaction","detail":"{read_only:false; response_revision:1362; number_of_response:1; }","duration":"131.730807ms","start":"2025-09-04T20:58:30.312459Z","end":"2025-09-04T20:58:30.444190Z","steps":["trace[2038104867] 'process raft request'  (duration: 48.653692ms)","trace[2038104867] 'compare'  (duration: 82.954949ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.642255Z","caller":"traceutil/trace.go:172","msg":"trace[822073905] transaction","detail":"{read_only:false; response_revision:1367; number_of_response:1; }","duration":"111.76712ms","start":"2025-09-04T20:58:30.530471Z","end":"2025-09-04T20:58:30.642238Z","steps":["trace[822073905] 'process raft request'  (duration: 111.724252ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:30.642413Z","caller":"traceutil/trace.go:172","msg":"trace[444267485] transaction","detail":"{read_only:false; response_revision:1366; number_of_response:1; }","duration":"173.99749ms","start":"2025-09-04T20:58:30.468390Z","end":"2025-09-04T20:58:30.642388Z","steps":["trace[444267485] 'process raft request'  (duration: 79.867478ms)","trace[444267485] 'compare'  (duration: 93.793378ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.701102Z","caller":"traceutil/trace.go:172","msg":"trace[1596308249] transaction","detail":"{read_only:false; response_revision:1368; number_of_response:1; }","duration":"114.975474ms","start":"2025-09-04T20:58:30.586109Z","end":"2025-09-04T20:58:30.701084Z","steps":["trace[1596308249] 'process raft request'  (duration: 114.88463ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:58:30.831980Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.440231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T20:58:30.832128Z","caller":"traceutil/trace.go:172","msg":"trace[1697395868] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:1369; }","duration":"126.599135ms","start":"2025-09-04T20:58:30.705510Z","end":"2025-09-04T20:58:30.832110Z","steps":["trace[1697395868] 'agreement among raft nodes before linearized reading'  (duration: 66.572708ms)","trace[1697395868] 'range keys from in-memory index tree'  (duration: 59.837538ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.832171Z","caller":"traceutil/trace.go:172","msg":"trace[645371951] transaction","detail":"{read_only:false; response_revision:1370; number_of_response:1; }","duration":"127.267701ms","start":"2025-09-04T20:58:30.704881Z","end":"2025-09-04T20:58:30.832149Z","steps":["trace[645371951] 'process raft request'  (duration: 67.258003ms)","trace[645371951] 'compare'  (duration: 59.843945ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.832360Z","caller":"traceutil/trace.go:172","msg":"trace[1978939171] transaction","detail":"{read_only:false; response_revision:1371; number_of_response:1; }","duration":"127.399776ms","start":"2025-09-04T20:58:30.704948Z","end":"2025-09-04T20:58:30.832348Z","steps":["trace[1978939171] 'process raft request'  (duration: 127.166902ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:30.832409Z","caller":"traceutil/trace.go:172","msg":"trace[1686127060] transaction","detail":"{read_only:false; response_revision:1372; number_of_response:1; }","duration":"126.865141ms","start":"2025-09-04T20:58:30.705526Z","end":"2025-09-04T20:58:30.832392Z","steps":["trace[1686127060] 'process raft request'  (duration: 126.765828ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:30.836024Z","caller":"traceutil/trace.go:172","msg":"trace[1408390840] transaction","detail":"{read_only:false; response_revision:1373; number_of_response:1; }","duration":"126.512815ms","start":"2025-09-04T20:58:30.705957Z","end":"2025-09-04T20:58:30.832469Z","steps":["trace[1408390840] 'process raft request'  (duration: 126.396725ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T21:06:10.083426Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1833}
	{"level":"info","ts":"2025-09-04T21:06:10.106295Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1833,"took":"22.240917ms","hash":3188799037,"current-db-size-bytes":6397952,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":4169728,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2025-09-04T21:06:10.106336Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3188799037,"revision":1833,"compact-revision":-1}
	
	
	==> kernel <==
	 21:06:31 up  2:49,  0 users,  load average: 0.74, 0.77, 0.60
	Linux addons-049370 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [c934f0f4b966c80bea5021ff2cd61d60fc1f09abb35b790b7fa2c052eb648772] <==
	I0904 21:04:23.572931       1 main.go:301] handling current node
	I0904 21:04:33.572869       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:04:33.572909       1 main.go:301] handling current node
	I0904 21:04:43.567579       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:04:43.567610       1 main.go:301] handling current node
	I0904 21:04:53.572848       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:04:53.572882       1 main.go:301] handling current node
	I0904 21:05:03.568245       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:05:03.568285       1 main.go:301] handling current node
	I0904 21:05:13.568869       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:05:13.568906       1 main.go:301] handling current node
	I0904 21:05:23.568887       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:05:23.568940       1 main.go:301] handling current node
	I0904 21:05:33.568623       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:05:33.568661       1 main.go:301] handling current node
	I0904 21:05:43.567774       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:05:43.567804       1 main.go:301] handling current node
	I0904 21:05:53.570580       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:05:53.570625       1 main.go:301] handling current node
	I0904 21:06:03.568840       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:06:03.568881       1 main.go:301] handling current node
	I0904 21:06:13.568304       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:06:13.568339       1 main.go:301] handling current node
	I0904 21:06:23.568843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:06:23.568901       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e754d67808d98a38d816120e6f2508d9bc342968fa147d926ff9d362a0796737] <==
	I0904 21:00:11.575196       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:00:37.217035       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:01:12.003193       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:01:57.163986       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:02:17.079567       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:03:09.847622       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:03:20.654283       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:04:19.146103       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:04:32.303972       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:05:17.501455       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 21:05:17.501506       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 21:05:17.515690       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 21:05:17.515829       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 21:05:17.516879       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 21:05:17.516916       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 21:05:17.531249       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 21:05:17.531303       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 21:05:17.550968       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 21:05:17.551008       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0904 21:05:18.545790       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0904 21:05:18.551240       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0904 21:05:18.572271       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0904 21:05:44.476458       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:05:57.186554       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:06:11.454508       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c5667de904598d16bc7b2fd5cfcd19280dc33b7d377dd608e1fc9961af9c518c] <==
	E0904 21:05:22.616951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:05:22.868238       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:05:22.869169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:05:23.043656       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:05:23.044573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:05:26.199907       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:05:26.201102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:05:27.339225       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:05:27.340147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:05:27.655144       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:05:27.656186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:05:35.147099       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:05:35.148050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:05:35.279037       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:05:35.280049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:05:40.039049       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:05:40.040017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:05:52.572692       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:05:52.573662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:05:54.820934       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:05:54.821893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:06:05.126321       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:06:05.127596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 21:06:24.157741       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 21:06:24.158733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [f6a9e9c72d6babda359c890098381bd848b231b9b281facb3f3cdc9763aee908] <==
	I0904 20:56:23.263174       1 server_linux.go:53] "Using iptables proxy"
	I0904 20:56:23.846890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 20:56:23.948000       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 20:56:23.948116       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 20:56:23.948247       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 20:56:24.347256       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 20:56:24.347395       1 server_linux.go:132] "Using iptables Proxier"
	I0904 20:56:24.361570       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 20:56:24.362683       1 server.go:527] "Version info" version="v1.34.0"
	I0904 20:56:24.362781       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 20:56:24.364537       1 config.go:200] "Starting service config controller"
	I0904 20:56:24.364555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 20:56:24.364576       1 config.go:106] "Starting endpoint slice config controller"
	I0904 20:56:24.364583       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 20:56:24.364619       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 20:56:24.364629       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 20:56:24.365511       1 config.go:309] "Starting node config controller"
	I0904 20:56:24.365557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 20:56:24.365570       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 20:56:24.465478       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 20:56:24.465535       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 20:56:24.465550       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c29c83b9956a13fe199c44a49b15dba2a1c0c21d5ba02c6402f6f23568614412] <==
	E0904 20:56:11.467729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 20:56:11.473120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 20:56:11.473221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 20:56:11.473483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 20:56:11.473678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 20:56:11.473762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 20:56:11.473851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 20:56:11.473905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 20:56:11.473951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 20:56:11.474028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0904 20:56:11.474102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0904 20:56:11.474173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 20:56:11.474244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 20:56:11.474321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 20:56:11.474380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 20:56:11.474468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 20:56:11.474521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 20:56:11.475320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 20:56:11.478116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 20:56:12.362165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 20:56:12.378813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 20:56:12.396736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 20:56:12.484405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 20:56:12.588679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0904 20:56:15.667534       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.090323    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/beb1078b6c293c4dc8a860b8cfe2f473f984af42b528927d3082ad1ca266f33b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/beb1078b6c293c4dc8a860b8cfe2f473f984af42b528927d3082ad1ca266f33b/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.091475    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/313a7db96a4c5fa5ba2f988845f1a4f6a2ee7c67842bf659a24ef2adebc097f4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/313a7db96a4c5fa5ba2f988845f1a4f6a2ee7c67842bf659a24ef2adebc097f4/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.091487    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/beb1078b6c293c4dc8a860b8cfe2f473f984af42b528927d3082ad1ca266f33b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/beb1078b6c293c4dc8a860b8cfe2f473f984af42b528927d3082ad1ca266f33b/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.092572    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/be4d47f0209b9f82ea35f6c06f9a3dba7f3e91f91cf535510729cd71b83de863/diff" to get inode usage: stat /var/lib/containers/storage/overlay/be4d47f0209b9f82ea35f6c06f9a3dba7f3e91f91cf535510729cd71b83de863/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.095818    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/313a7db96a4c5fa5ba2f988845f1a4f6a2ee7c67842bf659a24ef2adebc097f4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/313a7db96a4c5fa5ba2f988845f1a4f6a2ee7c67842bf659a24ef2adebc097f4/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.096940    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/be4d47f0209b9f82ea35f6c06f9a3dba7f3e91f91cf535510729cd71b83de863/diff" to get inode usage: stat /var/lib/containers/storage/overlay/be4d47f0209b9f82ea35f6c06f9a3dba7f3e91f91cf535510729cd71b83de863/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.105375    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5248c7fd534c408fbc6c66d4807a6ea18cb2a7563abbaabb239084fbb7ecf783/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5248c7fd534c408fbc6c66d4807a6ea18cb2a7563abbaabb239084fbb7ecf783/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.115946    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5248c7fd534c408fbc6c66d4807a6ea18cb2a7563abbaabb239084fbb7ecf783/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5248c7fd534c408fbc6c66d4807a6ea18cb2a7563abbaabb239084fbb7ecf783/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.148765    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ccfd400ddb7e620372736d70d4f9ee75ac8e2f1f1d31190caccb02f971f7a2a3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ccfd400ddb7e620372736d70d4f9ee75ac8e2f1f1d31190caccb02f971f7a2a3/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.150891    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/389f06ce80587bc2eec0069020366cfd91532699e186b0fef98ffa4a71609af2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/389f06ce80587bc2eec0069020366cfd91532699e186b0fef98ffa4a71609af2/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.152004    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a1b2ccdcb8955de740b0b229230ca117637bb801b6b594e6e584d94376c68eb7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a1b2ccdcb8955de740b0b229230ca117637bb801b6b594e6e584d94376c68eb7/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.154136    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/511f1c7798519d2206d544fd13d4beefb9df3948b4fbdcd1af4afb0f24249a94/diff" to get inode usage: stat /var/lib/containers/storage/overlay/511f1c7798519d2206d544fd13d4beefb9df3948b4fbdcd1af4afb0f24249a94/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.162796    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/511f1c7798519d2206d544fd13d4beefb9df3948b4fbdcd1af4afb0f24249a94/diff" to get inode usage: stat /var/lib/containers/storage/overlay/511f1c7798519d2206d544fd13d4beefb9df3948b4fbdcd1af4afb0f24249a94/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.166923    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a1b2ccdcb8955de740b0b229230ca117637bb801b6b594e6e584d94376c68eb7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a1b2ccdcb8955de740b0b229230ca117637bb801b6b594e6e584d94376c68eb7/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.175670    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ad37b9a1403f6c16d19404f51a3bffa78d12bccf6c650ef9f4bbca92dbdecf6b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ad37b9a1403f6c16d19404f51a3bffa78d12bccf6c650ef9f4bbca92dbdecf6b/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.177901    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/389f06ce80587bc2eec0069020366cfd91532699e186b0fef98ffa4a71609af2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/389f06ce80587bc2eec0069020366cfd91532699e186b0fef98ffa4a71609af2/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.357381    1676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019974357147978  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:06:14 addons-049370 kubelet[1676]: E0904 21:06:14.357431    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019974357147978  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:06:15 addons-049370 kubelet[1676]: E0904 21:06:15.979217    1676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fd6a62c3-3f28-47de-b93e-6a4222d72423"
	Sep 04 21:06:16 addons-049370 kubelet[1676]: E0904 21:06:16.977830    1676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="76e4007b-c8c9-43e1-882d-36f7c6c048cc"
	Sep 04 21:06:21 addons-049370 kubelet[1676]: W0904 21:06:21.532699    1676 logging.go:55] [core] [Channel #71 SubChannel #72]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Sep 04 21:06:24 addons-049370 kubelet[1676]: E0904 21:06:24.359198    1676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019984358994233  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:06:24 addons-049370 kubelet[1676]: E0904 21:06:24.359231    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019984358994233  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:06:28 addons-049370 kubelet[1676]: E0904 21:06:28.978120    1676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fd6a62c3-3f28-47de-b93e-6a4222d72423"
	Sep 04 21:06:29 addons-049370 kubelet[1676]: E0904 21:06:29.978511    1676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="76e4007b-c8c9-43e1-882d-36f7c6c048cc"
	
	
	==> storage-provisioner [5a078a0cc821dc014bcb985333d5bbfa410ad383f9567686488e54f4bdadf77c] <==
	W0904 21:06:07.212804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:09.215504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:09.220198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:11.223410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:11.226966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:13.229851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:13.234992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:15.237599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:15.241490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:17.245349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:17.250359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:19.253147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:19.257321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:21.260950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:21.264598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:23.267651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:23.272606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:25.275932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:25.279739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:27.282881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:27.286768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:29.289483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:29.294211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:31.297273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:06:31.301306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
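The storage-provisioner block above is dominated by one repeated warning: the core/v1 Endpoints API it watches is deprecated as of Kubernetes v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. As a quick sanity check when reading such logs (a sketch, assuming kubectl access to the same addons-049370 context), the replacement objects can be listed alongside the deprecated ones:

  # EndpointSlice objects that supersede the deprecated v1 Endpoints
  kubectl --context addons-049370 get endpointslices.discovery.k8s.io -A
  # the deprecated resource the provisioner is still polling
  kubectl --context addons-049370 get endpoints -A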
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-049370 -n addons-049370
helpers_test.go:269: (dbg) Run:  kubectl --context addons-049370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-049370 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-049370 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl: exit status 1 (72.968166ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-049370/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 20:58:29 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6ptm9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6ptm9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m3s                  default-scheduler  Successfully assigned default/nginx to addons-049370
	  Warning  Failed     7m31s                 kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m45s (x4 over 8m2s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     60s (x4 over 7m31s)   kubelet            Error: ErrImagePull
	  Warning  Failed     60s (x3 over 5m36s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4s (x9 over 7m31s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4s (x9 over 7m31s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-049370/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 20:59:14 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hr2vm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-hr2vm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  7m18s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-049370
	  Warning  Failed     4m4s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    102s (x4 over 7m17s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     30s (x3 over 6m7s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     30s (x4 over 6m7s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x7 over 6m6s)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     3s (x7 over 6m6s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hzwmn (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-hzwmn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bcplk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gtdvl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-049370 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-049370 addons disable ingress --alsologtostderr -v=1: (7.614709993s)
--- FAIL: TestAddons/parallel/Ingress (491.44s)
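Every pod failure in this test traces back to docker.io's unauthenticated pull rate limit (toomanyrequests), not to the ingress addon itself. When reproducing locally, one way to take the registry out of the equation is to pre-load the nginx images into the profile before re-running the test; a sketch, assuming the images are already present in the host's Docker daemon and the same profile name is used:

  # load the images the test pods need so kubelet never contacts docker.io
  out/minikube-linux-amd64 -p addons-049370 image load docker.io/nginx:alpine
  out/minikube-linux-amd64 -p addons-049370 image load docker.io/nginx:latest
  # confirm they are now in the node's container storage
  out/minikube-linux-amd64 -p addons-049370 image ls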

                                                
                                    
TestAddons/parallel/CSI (378.98s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0904 20:59:05.361998  388360 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0904 20:59:05.365174  388360 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0904 20:59:05.365200  388360 kapi.go:107] duration metric: took 3.23238ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.243299ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-049370 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-049370 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [76e4007b-c8c9-43e1-882d-36f7c6c048cc] Pending
helpers_test.go:352: "task-pv-pod" [76e4007b-c8c9-43e1-882d-36f7c6c048cc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-049370 -n addons-049370
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-09-04 21:05:14.961535203 +0000 UTC m=+591.805137768
addons_test.go:567: (dbg) Run:  kubectl --context addons-049370 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-049370 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-049370/192.168.49.2
Start Time:       Thu, 04 Sep 2025 20:59:14 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
IP:  10.244.0.28
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hr2vm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-hr2vm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m1s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-049370
Warning  Failed     2m47s                kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     75s (x2 over 4m50s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     75s (x3 over 4m50s)  kubelet            Error: ErrImagePull
Normal   BackOff    38s (x5 over 4m49s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     38s (x5 over 4m49s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    25s (x4 over 6m)     kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-049370 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-049370 logs task-pv-pod -n default: exit status 1 (66.00965ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-049370 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
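As with the Ingress test, the task-pv-pod failure is an image-pull problem rather than a CSI one: the hpvc claim is consumed and the pod is scheduled, but the docker.io/nginx pull keeps hitting the unauthenticated rate limit. A quick way to confirm the registry response from the node itself when reproducing (a sketch; crictl is normally available on minikube's CRI-O nodes):

  # attempt the same pull the kubelet is retrying and inspect the error
  out/minikube-linux-amd64 -p addons-049370 ssh -- sudo crictl pull docker.io/library/nginx:latest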
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-049370
helpers_test.go:243: (dbg) docker inspect addons-049370:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3",
	        "Created": "2025-09-04T20:55:59.262503813Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 390267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T20:55:59.29310334Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:26724cb1013292a869503ea8c35fa5a932e6ec3e4e85e318411373ae6a24478b",
	        "ResolvConfPath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/hosts",
	        "LogPath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3-json.log",
	        "Name": "/addons-049370",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-049370:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-049370",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3",
	                "LowerDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0-init/diff:/var/lib/docker/overlay2/fcc29a201369e5fb579bfafdafe90606e5b86bd2f4b52beb7f6a97f7e955214d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-049370",
	                "Source": "/var/lib/docker/volumes/addons-049370/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-049370",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-049370",
	                "name.minikube.sigs.k8s.io": "addons-049370",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebe38700b80a638159b3489df03c5870e9f15ecf00ad219d1d9b3fbc49acec55",
	            "SandboxKey": "/var/run/docker/netns/ebe38700b80a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-049370": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:41:22:73:0f:f1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2048bdf288b9f197869aef65f41d479e8afce6e3ad28d597acd24bc87d544c41",
	                    "EndpointID": "84d0e0934b5175bdbf5a7fed011cc5c5fd5e6125bf967cd744e715e3f5eb7d74",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-049370",
	                        "5caec540cec0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
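Most of the docker inspect dump above is static container configuration; for this failure the relevant facts are that the container is Running and that the addons-049370 network publishes the expected ports on 127.0.0.1. Narrower queries (assuming the container still exists) return just those pieces:

  docker inspect -f '{{json .State.Status}}' addons-049370
  docker inspect -f '{{json .NetworkSettings.Ports}}' addons-049370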
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-049370 -n addons-049370
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-049370 logs -n 25: (1.112411934s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-807406                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-807406   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ delete  │ -p download-only-640345                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-640345   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ delete  │ -p download-only-807406                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-807406   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ start   │ --download-only -p download-docker-306069 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-306069 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ delete  │ -p download-docker-306069                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-306069 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ start   │ --download-only -p binary-mirror-563304 --alsologtostderr --binary-mirror http://127.0.0.1:41655 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-563304   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ delete  │ -p binary-mirror-563304                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-563304   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ addons  │ disable dashboard -p addons-049370                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p addons-049370                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ start   │ -p addons-049370 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ enable headlamp -p addons-049370 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-049370                                                                                                                                                                                                                                                                                                                                                                                           │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ ip      │ addons-049370 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 21:04 UTC │ 04 Sep 25 21:04 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 20:55:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:55:35.931187  389648 out.go:360] Setting OutFile to fd 1 ...
	I0904 20:55:35.931440  389648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:35.931451  389648 out.go:374] Setting ErrFile to fd 2...
	I0904 20:55:35.931458  389648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:35.931653  389648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 20:55:35.932252  389648 out.go:368] Setting JSON to false
	I0904 20:55:35.933194  389648 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9485,"bootTime":1757009851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 20:55:35.933295  389648 start.go:140] virtualization: kvm guest
	I0904 20:55:35.935053  389648 out.go:179] * [addons-049370] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 20:55:35.936502  389648 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 20:55:35.936515  389648 notify.go:220] Checking for updates...
	I0904 20:55:35.938589  389648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:55:35.939875  389648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 20:55:35.941016  389648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 20:55:35.942120  389648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 20:55:35.943340  389648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 20:55:35.944678  389648 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 20:55:35.967955  389648 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 20:55:35.968038  389648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:55:36.013884  389648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 20:55:36.00384503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 20:55:36.013990  389648 docker.go:318] overlay module found
	I0904 20:55:36.015880  389648 out.go:179] * Using the docker driver based on user configuration
	I0904 20:55:36.017259  389648 start.go:304] selected driver: docker
	I0904 20:55:36.017279  389648 start.go:918] validating driver "docker" against <nil>
	I0904 20:55:36.017301  389648 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 20:55:36.018181  389648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:55:36.061743  389648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 20:55:36.053555345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 20:55:36.061946  389648 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 20:55:36.062186  389648 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:55:36.063851  389648 out.go:179] * Using Docker driver with root privileges
	I0904 20:55:36.065032  389648 cni.go:84] Creating CNI manager for ""
	I0904 20:55:36.065096  389648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:55:36.065109  389648 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 20:55:36.065189  389648 start.go:348] cluster config:
	{Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:55:36.066545  389648 out.go:179] * Starting "addons-049370" primary control-plane node in "addons-049370" cluster
	I0904 20:55:36.067696  389648 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 20:55:36.068952  389648 out.go:179] * Pulling base image v0.0.47-1756116447-21413 ...
	I0904 20:55:36.070027  389648 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:55:36.070067  389648 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 20:55:36.070084  389648 cache.go:58] Caching tarball of preloaded images
	I0904 20:55:36.070129  389648 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0904 20:55:36.070184  389648 preload.go:172] Found /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 20:55:36.070196  389648 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 20:55:36.070509  389648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/config.json ...
	I0904 20:55:36.070535  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/config.json: {Name:mkeaddf16ea076f194194c7e6e0eb8ad847648bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:55:36.085707  389648 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 to local cache
	I0904 20:55:36.085814  389648 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory
	I0904 20:55:36.085830  389648 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory, skipping pull
	I0904 20:55:36.085834  389648 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 exists in cache, skipping pull
	I0904 20:55:36.085841  389648 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 as a tarball
	I0904 20:55:36.085848  389648 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 from local cache
	I0904 20:55:47.569774  389648 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 from cached tarball
	I0904 20:55:47.569822  389648 cache.go:232] Successfully downloaded all kic artifacts
	I0904 20:55:47.569872  389648 start.go:360] acquireMachinesLock for addons-049370: {Name:mk8e52f32278895920c6de02ca736f9f45438008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:55:47.569963  389648 start.go:364] duration metric: took 71.514µs to acquireMachinesLock for "addons-049370"
	I0904 20:55:47.569986  389648 start.go:93] Provisioning new machine with config: &{Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:55:47.570051  389648 start.go:125] createHost starting for "" (driver="docker")
	I0904 20:55:47.571722  389648 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0904 20:55:47.571956  389648 start.go:159] libmachine.API.Create for "addons-049370" (driver="docker")
	I0904 20:55:47.571986  389648 client.go:168] LocalClient.Create starting
	I0904 20:55:47.572093  389648 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem
	I0904 20:55:47.750984  389648 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem
	I0904 20:55:47.850792  389648 cli_runner.go:164] Run: docker network inspect addons-049370 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 20:55:47.867272  389648 cli_runner.go:211] docker network inspect addons-049370 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 20:55:47.867344  389648 network_create.go:284] running [docker network inspect addons-049370] to gather additional debugging logs...
	I0904 20:55:47.867369  389648 cli_runner.go:164] Run: docker network inspect addons-049370
	W0904 20:55:47.882593  389648 cli_runner.go:211] docker network inspect addons-049370 returned with exit code 1
	I0904 20:55:47.882619  389648 network_create.go:287] error running [docker network inspect addons-049370]: docker network inspect addons-049370: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-049370 not found
	I0904 20:55:47.882643  389648 network_create.go:289] output of [docker network inspect addons-049370]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-049370 not found
	
	** /stderr **
	I0904 20:55:47.882767  389648 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:55:47.897896  389648 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f29240}
	I0904 20:55:47.897941  389648 network_create.go:124] attempt to create docker network addons-049370 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0904 20:55:47.897989  389648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-049370 addons-049370
	I0904 20:55:47.946511  389648 network_create.go:108] docker network addons-049370 192.168.49.0/24 created
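The "docker network create" call above is what pins the cluster to the 192.168.49.0/24 subnet that later log lines assume. Purely as an illustration (not a step the test run executes), the resulting network could be checked by hand like this:

  # Hypothetical manual check of the network minikube just created.
  docker network inspect addons-049370 \
    --format 'subnet={{range .IPAM.Config}}{{.Subnet}}{{end}} gateway={{range .IPAM.Config}}{{.Gateway}}{{end}}'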
	I0904 20:55:47.946541  389648 kic.go:121] calculated static IP "192.168.49.2" for the "addons-049370" container
	I0904 20:55:47.946616  389648 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 20:55:47.961507  389648 cli_runner.go:164] Run: docker volume create addons-049370 --label name.minikube.sigs.k8s.io=addons-049370 --label created_by.minikube.sigs.k8s.io=true
	I0904 20:55:47.977348  389648 oci.go:103] Successfully created a docker volume addons-049370
	I0904 20:55:47.977414  389648 cli_runner.go:164] Run: docker run --rm --name addons-049370-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-049370 --entrypoint /usr/bin/test -v addons-049370:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -d /var/lib
	I0904 20:55:54.908931  389648 cli_runner.go:217] Completed: docker run --rm --name addons-049370-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-049370 --entrypoint /usr/bin/test -v addons-049370:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -d /var/lib: (6.931464681s)
	I0904 20:55:54.908963  389648 oci.go:107] Successfully prepared a docker volume addons-049370
	I0904 20:55:54.908988  389648 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:55:54.909014  389648 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 20:55:54.909085  389648 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-049370:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 20:55:59.203486  389648 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-049370:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.294349299s)
	I0904 20:55:59.203526  389648 kic.go:203] duration metric: took 4.294508066s to extract preloaded images to volume ...
	W0904 20:55:59.203673  389648 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 20:55:59.203816  389648 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 20:55:59.248150  389648 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-049370 --name addons-049370 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-049370 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-049370 --network addons-049370 --ip 192.168.49.2 --volume addons-049370:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9
	I0904 20:55:59.483162  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Running}}
	I0904 20:55:59.500560  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:55:59.519189  389648 cli_runner.go:164] Run: docker exec addons-049370 stat /var/lib/dpkg/alternatives/iptables
	I0904 20:55:59.559150  389648 oci.go:144] the created container "addons-049370" has a running status.
	I0904 20:55:59.559182  389648 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa...
	I0904 20:55:59.730819  389648 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 20:55:59.749901  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:55:59.769336  389648 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 20:55:59.769365  389648 kic_runner.go:114] Args: [docker exec --privileged addons-049370 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 20:55:59.858697  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:55:59.878986  389648 machine.go:93] provisionDockerMachine start ...
	I0904 20:55:59.879111  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:55:59.900388  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:55:59.900618  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:55:59.900630  389648 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 20:56:00.092134  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-049370
	
	I0904 20:56:00.092166  389648 ubuntu.go:182] provisioning hostname "addons-049370"
	I0904 20:56:00.092222  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.110942  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:00.111171  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:56:00.111192  389648 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-049370 && echo "addons-049370" | sudo tee /etc/hostname
	I0904 20:56:00.235028  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-049370
	
	I0904 20:56:00.235115  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.254182  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:00.254444  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:56:00.254463  389648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-049370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-049370/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-049370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 20:56:00.364487  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 20:56:00.364528  389648 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21490-384635/.minikube CaCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21490-384635/.minikube}
	I0904 20:56:00.364564  389648 ubuntu.go:190] setting up certificates
	I0904 20:56:00.364581  389648 provision.go:84] configureAuth start
	I0904 20:56:00.364638  389648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-049370
	I0904 20:56:00.380933  389648 provision.go:143] copyHostCerts
	I0904 20:56:00.381007  389648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem (1078 bytes)
	I0904 20:56:00.381110  389648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem (1123 bytes)
	I0904 20:56:00.381171  389648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem (1675 bytes)
	I0904 20:56:00.381291  389648 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem org=jenkins.addons-049370 san=[127.0.0.1 192.168.49.2 addons-049370 localhost minikube]
	I0904 20:56:00.582774  389648 provision.go:177] copyRemoteCerts
	I0904 20:56:00.582833  389648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 20:56:00.582888  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.600896  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
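For orientation, the SSH endpoint logged here (127.0.0.1:33145, key under .minikube/machines/addons-049370, user docker) is an ordinary forwarded SSH port; a manual login equivalent to what ssh_runner does would look roughly like the following (illustrative only, not part of the test flow):

  # Hypothetical manual login over the forwarded port shown in the log above.
  ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa \
      -p 33145 docker@127.0.0.1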
	I0904 20:56:00.685189  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 20:56:00.706872  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 20:56:00.727318  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 20:56:00.747581  389648 provision.go:87] duration metric: took 382.988372ms to configureAuth
	I0904 20:56:00.747609  389648 ubuntu.go:206] setting minikube options for container-runtime
	I0904 20:56:00.747766  389648 config.go:182] Loaded profile config "addons-049370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:00.747906  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.764149  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:00.764350  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:56:00.764368  389648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 20:56:00.958932  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 20:56:00.958968  389648 machine.go:96] duration metric: took 1.079954584s to provisionDockerMachine
	I0904 20:56:00.958982  389648 client.go:171] duration metric: took 13.386987071s to LocalClient.Create
	I0904 20:56:00.959009  389648 start.go:167] duration metric: took 13.387053802s to libmachine.API.Create "addons-049370"
	I0904 20:56:00.959025  389648 start.go:293] postStartSetup for "addons-049370" (driver="docker")
	I0904 20:56:00.959040  389648 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 20:56:00.959109  389648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 20:56:00.959158  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.975608  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.061278  389648 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 20:56:01.064210  389648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 20:56:01.064237  389648 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 20:56:01.064244  389648 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 20:56:01.064251  389648 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 20:56:01.064263  389648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/addons for local assets ...
	I0904 20:56:01.064321  389648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/files for local assets ...
	I0904 20:56:01.064347  389648 start.go:296] duration metric: took 105.314476ms for postStartSetup
	I0904 20:56:01.064647  389648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-049370
	I0904 20:56:01.081390  389648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/config.json ...
	I0904 20:56:01.081619  389648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 20:56:01.081659  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:01.098242  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.177520  389648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 20:56:01.181443  389648 start.go:128] duration metric: took 13.611378177s to createHost
	I0904 20:56:01.181464  389648 start.go:83] releasing machines lock for "addons-049370", held for 13.611489751s
	I0904 20:56:01.181518  389648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-049370
	I0904 20:56:01.197665  389648 ssh_runner.go:195] Run: cat /version.json
	I0904 20:56:01.197712  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:01.197747  389648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 20:56:01.197832  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:01.217406  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.217960  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.369596  389648 ssh_runner.go:195] Run: systemctl --version
	I0904 20:56:01.373474  389648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 20:56:01.509565  389648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 20:56:01.513834  389648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:56:01.530180  389648 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 20:56:01.530256  389648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:56:01.553751  389648 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0904 20:56:01.553778  389648 start.go:495] detecting cgroup driver to use...
	I0904 20:56:01.553812  389648 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 20:56:01.553868  389648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 20:56:01.567182  389648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 20:56:01.576378  389648 docker.go:218] disabling cri-docker service (if available) ...
	I0904 20:56:01.576432  389648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 20:56:01.587988  389648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 20:56:01.599829  389648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 20:56:01.673115  389648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 20:56:01.753644  389648 docker.go:234] disabling docker service ...
	I0904 20:56:01.753708  389648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 20:56:01.770449  389648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 20:56:01.780079  389648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 20:56:01.852634  389648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 20:56:01.929656  389648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 20:56:01.939388  389648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 20:56:01.953483  389648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 20:56:01.953533  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.961514  389648 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 20:56:01.961581  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.969587  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.977328  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.985460  389648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 20:56:01.992893  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:02.000897  389648 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:02.014229  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:02.022636  389648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 20:56:02.029801  389648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 20:56:02.036815  389648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:02.107470  389648 ssh_runner.go:195] Run: sudo systemctl restart crio
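The run of sed commands above edits exactly four settings in the CRI-O drop-in before the restart: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A quick way to confirm those edits took effect would be a grep like the one below; this is an illustration, not a step the test performs, and the values in the comment are simply the ones the sed commands set:

  # Illustrative check of the CRI-O drop-in edited above (not run by the test).
  # Expected values per the sed commands: pause_image = "registry.k8s.io/pause:3.10.1",
  # cgroup_manager = "cgroupfs", conmon_cgroup = "pod",
  # and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls.
  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf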
	I0904 20:56:02.204181  389648 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 20:56:02.204269  389648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 20:56:02.207556  389648 start.go:563] Will wait 60s for crictl version
	I0904 20:56:02.207613  389648 ssh_runner.go:195] Run: which crictl
	I0904 20:56:02.210531  389648 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 20:56:02.242395  389648 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 20:56:02.242466  389648 ssh_runner.go:195] Run: crio --version
	I0904 20:56:02.275988  389648 ssh_runner.go:195] Run: crio --version
	I0904 20:56:02.310411  389648 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 20:56:02.311905  389648 cli_runner.go:164] Run: docker network inspect addons-049370 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:56:02.327725  389648 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 20:56:02.331056  389648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:56:02.340959  389648 kubeadm.go:875] updating cluster {Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 20:56:02.341073  389648 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:56:02.341116  389648 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:56:02.405091  389648 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:56:02.405113  389648 crio.go:433] Images already preloaded, skipping extraction
	I0904 20:56:02.405157  389648 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:56:02.435602  389648 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:56:02.435624  389648 cache_images.go:85] Images are preloaded, skipping loading
	I0904 20:56:02.435633  389648 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0904 20:56:02.435742  389648 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-049370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 20:56:02.435801  389648 ssh_runner.go:195] Run: crio config
	I0904 20:56:02.475208  389648 cni.go:84] Creating CNI manager for ""
	I0904 20:56:02.475229  389648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:56:02.475242  389648 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 20:56:02.475263  389648 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-049370 NodeName:addons-049370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 20:56:02.475385  389648 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-049370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 20:56:02.475439  389648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 20:56:02.483384  389648 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 20:56:02.483434  389648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 20:56:02.490999  389648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 20:56:02.506097  389648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 20:56:02.521263  389648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
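The kubeadm config generated above has just been written to /var/tmp/minikube/kubeadm.yaml.new. Outside of minikube, a config like this could be sanity-checked with kubeadm itself; the command below is only an illustration under the assumption that the bundled kubeadm supports the "config validate" subcommand (present in recent releases), and it is not a step the test flow runs:

  # Hypothetical validation of the generated config; not a step minikube runs here.
  sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new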
	I0904 20:56:02.536086  389648 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0904 20:56:02.539041  389648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:56:02.548083  389648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:02.620733  389648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:56:02.632098  389648 certs.go:68] Setting up /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370 for IP: 192.168.49.2
	I0904 20:56:02.632134  389648 certs.go:194] generating shared ca certs ...
	I0904 20:56:02.632155  389648 certs.go:226] acquiring lock for ca certs: {Name:mkab35eaf89c739a644dd45428dbbd1b30c489d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:02.632303  389648 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key
	I0904 20:56:02.772055  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt ...
	I0904 20:56:02.772085  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt: {Name:mk404ac6f8708b208ba3c17564d32d1c6e1f2d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:02.772267  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key ...
	I0904 20:56:02.772279  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key: {Name:mk0f029ece1be42b4490f030d22d0963e0de5ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:02.772354  389648 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key
	I0904 20:56:03.010123  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt ...
	I0904 20:56:03.010158  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt: {Name:mk7836ca5bbc78d58e9f795ae3bd0cc1b3f94116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.010336  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key ...
	I0904 20:56:03.010350  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key: {Name:mk4a37f8d0fc0b197f0796089f579493b4ab1519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.010419  389648 certs.go:256] generating profile certs ...
	I0904 20:56:03.010492  389648 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.key
	I0904 20:56:03.010508  389648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt with IP's: []
	I0904 20:56:03.189084  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt ...
	I0904 20:56:03.189116  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: {Name:mkd7ec52fc00b41923df1429201e9537ed50a6ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.189278  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.key ...
	I0904 20:56:03.189288  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.key: {Name:mk02506672d1abc668baddf35412038560ece7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.189360  389648 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8
	I0904 20:56:03.189379  389648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0904 20:56:03.499646  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8 ...
	I0904 20:56:03.499681  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8: {Name:mk8c9ae053706a4ea8f20f5fd17de3c20f5c4e30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.499842  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8 ...
	I0904 20:56:03.499857  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8: {Name:mk9c5b0ad197ad61ad1f2b3b99dfc9c995bc0acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.499927  389648 certs.go:381] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt
	I0904 20:56:03.500017  389648 certs.go:385] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key
	I0904 20:56:03.500063  389648 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key
	I0904 20:56:03.500080  389648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt with IP's: []
	I0904 20:56:04.206716  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt ...
	I0904 20:56:04.206749  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt: {Name:mk2210684251083ae7ccb41ecbd3350906b53776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:04.206912  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key ...
	I0904 20:56:04.206925  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key: {Name:mk24ebbc3c1cb4ca4f1f7bb1a93ec6d982e6058d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:04.207093  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem (1679 bytes)
	I0904 20:56:04.207128  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem (1078 bytes)
	I0904 20:56:04.207156  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem (1123 bytes)
	I0904 20:56:04.207178  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem (1675 bytes)
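minikube generates all of the certificates above with its own Go helpers (crypto.go / certs.go); none of it goes through openssl. Purely for orientation, the signed apiserver certificate with the SANs listed at 20:56:03.189379 corresponds to roughly the following openssl steps (an illustrative equivalent with hypothetical file names, not the code path actually used):

  # Rough openssl equivalent of the apiserver profile cert generated above.
  openssl genrsa -out apiserver.key 2048
  openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out apiserver.crt \
    -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.49.2")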
	I0904 20:56:04.207825  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 20:56:04.229255  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 20:56:04.249412  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 20:56:04.269463  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 20:56:04.289100  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 20:56:04.309546  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 20:56:04.330101  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 20:56:04.350231  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 20:56:04.370529  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 20:56:04.390259  389648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 20:56:04.404879  389648 ssh_runner.go:195] Run: openssl version
	I0904 20:56:04.409558  389648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 20:56:04.417330  389648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:04.420173  389648 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:56 /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:04.420213  389648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:04.426284  389648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
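The symlink name b5213941.0 used just above is not arbitrary: it is the subject hash of the CA certificate, which is exactly what the preceding "openssl x509 -hash -noout" invocation computes. As an illustration:

  # The hash printed by this command is what names the /etc/ssl/certs/<hash>.0 symlink above.
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # per the symlink created above, this prints: b5213941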
	I0904 20:56:04.434253  389648 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 20:56:04.437015  389648 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 20:56:04.437090  389648 kubeadm.go:392] StartCluster: {Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:56:04.437155  389648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 20:56:04.437197  389648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 20:56:04.468884  389648 cri.go:89] found id: ""
	I0904 20:56:04.468950  389648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 20:56:04.476436  389648 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 20:56:04.483832  389648 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 20:56:04.483872  389648 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 20:56:04.491177  389648 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 20:56:04.491196  389648 kubeadm.go:157] found existing configuration files:
	
	I0904 20:56:04.491247  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 20:56:04.498385  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 20:56:04.498431  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 20:56:04.505641  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 20:56:04.512961  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 20:56:04.512996  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 20:56:04.519960  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 20:56:04.527106  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 20:56:04.527145  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 20:56:04.534344  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 20:56:04.541535  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 20:56:04.541584  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 20:56:04.548873  389648 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 20:56:04.583125  389648 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 20:56:04.583201  389648 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 20:56:04.597681  389648 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 20:56:04.597741  389648 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0904 20:56:04.597803  389648 kubeadm.go:310] OS: Linux
	I0904 20:56:04.597915  389648 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 20:56:04.597990  389648 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 20:56:04.598061  389648 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 20:56:04.598158  389648 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 20:56:04.598223  389648 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 20:56:04.598271  389648 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 20:56:04.598336  389648 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 20:56:04.598406  389648 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 20:56:04.598474  389648 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 20:56:04.647143  389648 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 20:56:04.647322  389648 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 20:56:04.647453  389648 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 20:56:04.653687  389648 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 20:56:04.656516  389648 out.go:252]   - Generating certificates and keys ...
	I0904 20:56:04.656617  389648 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 20:56:04.656693  389648 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 20:56:04.868159  389648 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 20:56:05.089300  389648 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 20:56:05.307580  389648 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 20:56:05.541675  389648 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 20:56:05.660773  389648 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 20:56:05.660952  389648 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-049370 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:56:05.874335  389648 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 20:56:05.874525  389648 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-049370 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:56:06.201674  389648 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 20:56:06.395227  389648 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 20:56:06.658231  389648 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 20:56:06.658358  389648 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 20:56:06.844487  389648 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 20:56:07.298671  389648 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 20:56:07.543710  389648 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 20:56:07.923783  389648 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 20:56:08.223748  389648 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 20:56:08.224259  389648 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 20:56:08.226815  389648 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 20:56:08.228639  389648 out.go:252]   - Booting up control plane ...
	I0904 20:56:08.228790  389648 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 20:56:08.228909  389648 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 20:56:08.228988  389648 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 20:56:08.237068  389648 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 20:56:08.237206  389648 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 20:56:08.242388  389648 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 20:56:08.242635  389648 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 20:56:08.242706  389648 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 20:56:08.316793  389648 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 20:56:08.316922  389648 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 20:56:08.818465  389648 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.780617ms
	I0904 20:56:08.822350  389648 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 20:56:08.822466  389648 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0904 20:56:08.822584  389648 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 20:56:08.822692  389648 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 20:56:10.827725  389648 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.005237267s
	I0904 20:56:11.470833  389648 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.648459396s
	I0904 20:56:13.324669  389648 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.502233446s
	I0904 20:56:13.335088  389648 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 20:56:13.344120  389648 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 20:56:13.351749  389648 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 20:56:13.351978  389648 kubeadm.go:310] [mark-control-plane] Marking the node addons-049370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 20:56:13.359295  389648 kubeadm.go:310] [bootstrap-token] Using token: 2wn3c0.ojgacqfx8o0hgs3z
	I0904 20:56:13.360520  389648 out.go:252]   - Configuring RBAC rules ...
	I0904 20:56:13.360674  389648 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 20:56:13.363353  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 20:56:13.367752  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 20:56:13.369941  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 20:56:13.372028  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 20:56:13.375032  389648 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 20:56:13.729580  389648 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 20:56:14.144230  389648 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 20:56:14.730781  389648 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 20:56:14.731685  389648 kubeadm.go:310] 
	I0904 20:56:14.731789  389648 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 20:56:14.731799  389648 kubeadm.go:310] 
	I0904 20:56:14.731900  389648 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 20:56:14.731934  389648 kubeadm.go:310] 
	I0904 20:56:14.731997  389648 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 20:56:14.732055  389648 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 20:56:14.732151  389648 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 20:56:14.732161  389648 kubeadm.go:310] 
	I0904 20:56:14.732233  389648 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 20:56:14.732242  389648 kubeadm.go:310] 
	I0904 20:56:14.732312  389648 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 20:56:14.732321  389648 kubeadm.go:310] 
	I0904 20:56:14.732378  389648 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 20:56:14.732445  389648 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 20:56:14.732534  389648 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 20:56:14.732544  389648 kubeadm.go:310] 
	I0904 20:56:14.732650  389648 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 20:56:14.732787  389648 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 20:56:14.732801  389648 kubeadm.go:310] 
	I0904 20:56:14.732903  389648 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2wn3c0.ojgacqfx8o0hgs3z \
	I0904 20:56:14.733021  389648 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 \
	I0904 20:56:14.733052  389648 kubeadm.go:310] 	--control-plane 
	I0904 20:56:14.733062  389648 kubeadm.go:310] 
	I0904 20:56:14.733161  389648 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 20:56:14.733169  389648 kubeadm.go:310] 
	I0904 20:56:14.733281  389648 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2wn3c0.ojgacqfx8o0hgs3z \
	I0904 20:56:14.733409  389648 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 
	I0904 20:56:14.735269  389648 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0904 20:56:14.735560  389648 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0904 20:56:14.735715  389648 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 20:56:14.735757  389648 cni.go:84] Creating CNI manager for ""
	I0904 20:56:14.735771  389648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:56:14.737265  389648 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0904 20:56:14.738354  389648 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 20:56:14.741948  389648 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0904 20:56:14.741966  389648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 20:56:14.758407  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 20:56:14.949539  389648 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 20:56:14.949629  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:14.949645  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-049370 minikube.k8s.io/updated_at=2025_09_04T20_56_14_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a minikube.k8s.io/name=addons-049370 minikube.k8s.io/primary=true
	I0904 20:56:14.957282  389648 ops.go:34] apiserver oom_adj: -16
	I0904 20:56:15.056202  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:15.556268  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:16.056217  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:16.557001  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:17.057153  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:17.556713  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:18.057056  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:18.556307  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:19.057162  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:19.120633  389648 kubeadm.go:1105] duration metric: took 4.171070637s to wait for elevateKubeSystemPrivileges
	I0904 20:56:19.120676  389648 kubeadm.go:394] duration metric: took 14.683591745s to StartCluster
	I0904 20:56:19.120715  389648 settings.go:142] acquiring lock: {Name:mke06342cfb6705345a5c7324f763dc44aea4569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:19.120870  389648 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 20:56:19.121542  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/kubeconfig: {Name:mk6b311573f3fade9cba8f894d5c9f5ca76d1e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:19.121797  389648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 20:56:19.121845  389648 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:56:19.121892  389648 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0904 20:56:19.122079  389648 config.go:182] Loaded profile config "addons-049370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:19.122530  389648 addons.go:69] Setting inspektor-gadget=true in profile "addons-049370"
	I0904 20:56:19.122543  389648 addons.go:69] Setting yakd=true in profile "addons-049370"
	I0904 20:56:19.122568  389648 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-049370"
	I0904 20:56:19.122574  389648 addons.go:69] Setting registry-creds=true in profile "addons-049370"
	I0904 20:56:19.122584  389648 addons.go:238] Setting addon yakd=true in "addons-049370"
	I0904 20:56:19.122588  389648 addons.go:69] Setting metrics-server=true in profile "addons-049370"
	I0904 20:56:19.122595  389648 addons.go:238] Setting addon registry-creds=true in "addons-049370"
	I0904 20:56:19.122597  389648 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-049370"
	I0904 20:56:19.122606  389648 addons.go:238] Setting addon metrics-server=true in "addons-049370"
	I0904 20:56:19.122631  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122635  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122637  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122574  389648 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-049370"
	I0904 20:56:19.122615  389648 addons.go:69] Setting registry=true in profile "addons-049370"
	I0904 20:56:19.122683  389648 addons.go:69] Setting cloud-spanner=true in profile "addons-049370"
	I0904 20:56:19.122665  389648 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-049370"
	I0904 20:56:19.122703  389648 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-049370"
	I0904 20:56:19.122729  389648 addons.go:238] Setting addon registry=true in "addons-049370"
	I0904 20:56:19.122730  389648 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-049370"
	I0904 20:56:19.122740  389648 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-049370"
	I0904 20:56:19.122757  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122781  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.123155  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123184  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123217  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122637  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.123265  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123272  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123605  389648 addons.go:69] Setting storage-provisioner=true in profile "addons-049370"
	I0904 20:56:19.123629  389648 addons.go:238] Setting addon storage-provisioner=true in "addons-049370"
	I0904 20:56:19.123657  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.123677  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123784  389648 addons.go:69] Setting volumesnapshots=true in profile "addons-049370"
	I0904 20:56:19.123801  389648 addons.go:238] Setting addon volumesnapshots=true in "addons-049370"
	I0904 20:56:19.123826  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.124143  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122696  389648 addons.go:238] Setting addon cloud-spanner=true in "addons-049370"
	I0904 20:56:19.124582  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.124795  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123219  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.125090  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.125204  389648 addons.go:69] Setting gcp-auth=true in profile "addons-049370"
	I0904 20:56:19.126262  389648 mustload.go:65] Loading cluster: addons-049370
	I0904 20:56:19.126543  389648 config.go:182] Loaded profile config "addons-049370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:19.126863  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122674  389648 addons.go:69] Setting default-storageclass=true in profile "addons-049370"
	I0904 20:56:19.130409  389648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-049370"
	I0904 20:56:19.130766  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122663  389648 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-049370"
	I0904 20:56:19.132380  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.132897  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.160874  389648 out.go:179] * Verifying Kubernetes components...
	I0904 20:56:19.122562  389648 addons.go:238] Setting addon inspektor-gadget=true in "addons-049370"
	I0904 20:56:19.161079  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.161765  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.163225  389648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:19.125353  389648 addons.go:69] Setting ingress=true in profile "addons-049370"
	I0904 20:56:19.164437  389648 addons.go:238] Setting addon ingress=true in "addons-049370"
	I0904 20:56:19.164483  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.164897  389648 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0904 20:56:19.166432  389648 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0904 20:56:19.165219  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.167756  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0904 20:56:19.168483  389648 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 20:56:19.168508  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0904 20:56:19.168567  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.168620  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0904 20:56:19.168633  389648 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0904 20:56:19.168672  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.170255  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0904 20:56:19.170500  389648 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-049370"
	I0904 20:56:19.170541  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.165299  389648 addons.go:69] Setting volcano=true in profile "addons-049370"
	I0904 20:56:19.170598  389648 addons.go:238] Setting addon volcano=true in "addons-049370"
	I0904 20:56:19.170662  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.125367  389648 addons.go:69] Setting ingress-dns=true in profile "addons-049370"
	I0904 20:56:19.170703  389648 addons.go:238] Setting addon ingress-dns=true in "addons-049370"
	I0904 20:56:19.170745  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.171072  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.171559  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.171696  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.173941  389648 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0904 20:56:19.174145  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0904 20:56:19.175294  389648 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 20:56:19.175317  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0904 20:56:19.175370  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.176359  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0904 20:56:19.177473  389648 out.go:179]   - Using image docker.io/registry:3.0.0
	I0904 20:56:19.178644  389648 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0904 20:56:19.179797  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0904 20:56:19.184781  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0904 20:56:19.185493  389648 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0904 20:56:19.185566  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0904 20:56:19.185663  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.193293  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0904 20:56:19.193281  389648 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0904 20:56:19.193365  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0904 20:56:19.193325  389648 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 20:56:19.194473  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 20:56:19.194494  389648 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 20:56:19.194572  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.195252  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0904 20:56:19.195290  389648 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0904 20:56:19.195358  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.195374  389648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:56:19.195397  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 20:56:19.195449  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.195584  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0904 20:56:19.196553  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0904 20:56:19.196568  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0904 20:56:19.196639  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	W0904 20:56:19.205941  389648 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0904 20:56:19.216885  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.234344  389648 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0904 20:56:19.234475  389648 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0904 20:56:19.236096  389648 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0904 20:56:19.236117  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0904 20:56:19.236181  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.236410  389648 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0904 20:56:19.236424  389648 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0904 20:56:19.236486  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.238985  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.249090  389648 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0904 20:56:19.250463  389648 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:56:19.250482  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0904 20:56:19.250581  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.251306  389648 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0904 20:56:19.252687  389648 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:56:19.252707  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0904 20:56:19.252773  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.253251  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.253990  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.275865  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.276415  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.286380  389648 addons.go:238] Setting addon default-storageclass=true in "addons-049370"
	I0904 20:56:19.286427  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.286470  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.286751  389648 out.go:179]   - Using image docker.io/busybox:stable
	I0904 20:56:19.286808  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0904 20:56:19.286911  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.289833  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.290415  389648 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0904 20:56:19.290520  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:19.290861  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.291698  389648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:56:19.291722  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0904 20:56:19.291783  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.294160  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:19.298343  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.298945  389648 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:56:19.298968  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0904 20:56:19.299026  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.302411  389648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 20:56:19.306360  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.309871  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.312171  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.319635  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.321085  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.321297  389648 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 20:56:19.321320  389648 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 20:56:19.321378  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.337083  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	W0904 20:56:19.349311  389648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0904 20:56:19.349347  389648 retry.go:31] will retry after 269.872023ms: ssh: handshake failed: EOF
	W0904 20:56:19.349375  389648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0904 20:56:19.349384  389648 retry.go:31] will retry after 359.531202ms: ssh: handshake failed: EOF
	I0904 20:56:19.548037  389648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:56:19.652723  389648 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0904 20:56:19.652769  389648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0904 20:56:19.663141  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0904 20:56:19.663174  389648 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0904 20:56:19.746376  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 20:56:19.746406  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0904 20:56:19.746783  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0904 20:56:19.746802  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0904 20:56:19.751531  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 20:56:19.756963  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:56:19.767028  389648 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0904 20:56:19.767122  389648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0904 20:56:19.861846  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0904 20:56:19.861944  389648 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0904 20:56:19.946528  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:56:19.947053  389648 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:19.947078  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0904 20:56:19.955391  389648 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0904 20:56:19.955469  389648 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0904 20:56:19.959896  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:56:19.964131  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 20:56:19.964187  389648 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0904 20:56:19.966688  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 20:56:19.967554  389648 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0904 20:56:19.967597  389648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0904 20:56:19.969099  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:56:19.970420  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 20:56:20.047283  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0904 20:56:20.047381  389648 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0904 20:56:20.054133  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0904 20:56:20.054222  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0904 20:56:20.255401  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:56:20.255496  389648 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 20:56:20.266289  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:20.268923  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0904 20:56:20.345501  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:56:20.348902  389648 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:56:20.348951  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0904 20:56:20.349097  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0904 20:56:20.349114  389648 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0904 20:56:20.448730  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:56:20.448833  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0904 20:56:20.564135  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0904 20:56:20.564226  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0904 20:56:20.751518  389648 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.44906046s)
	I0904 20:56:20.751627  389648 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
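The sed pipeline completed above rewrites the CoreDNS Corefile in place; assuming the stock ConfigMap layout, the rewritten fragment should look roughly like this sketch (other directives omitted):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}
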
	I0904 20:56:20.751853  389648 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.203672375s)
	I0904 20:56:20.754155  389648 node_ready.go:35] waiting up to 6m0s for node "addons-049370" to be "Ready" ...
	I0904 20:56:20.761736  389648 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:20.761796  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0904 20:56:20.846606  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:56:20.856051  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:56:20.866698  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0904 20:56:20.866814  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0904 20:56:21.145379  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:56:21.350361  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.598747473s)
	I0904 20:56:21.367474  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:21.448274  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0904 20:56:21.448385  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0904 20:56:21.655407  389648 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-049370" context rescaled to 1 replicas
	I0904 20:56:21.846590  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0904 20:56:21.846680  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0904 20:56:22.161088  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0904 20:56:22.161184  389648 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0904 20:56:22.558322  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0904 20:56:22.558416  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0904 20:56:22.757443  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0904 20:56:22.757535  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	W0904 20:56:22.854862  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:23.062691  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:56:23.062785  389648 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0904 20:56:23.546535  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:56:23.864177  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.107174607s)
	I0904 20:56:24.150368  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.203792982s)
	I0904 20:56:24.150762  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.190798522s)
	I0904 20:56:24.150841  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.184107525s)
	I0904 20:56:24.150883  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.181727834s)
	I0904 20:56:24.150921  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.180454067s)
	I0904 20:56:24.153577  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.887255095s)
	W0904 20:56:24.153617  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:24.153648  389648 retry.go:31] will retry after 274.263741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
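The two dumps above record the same failure: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one document in that file lacks the required top-level apiVersion and kind fields, so the whole apply exits non-zero even though the ig-deployment.yaml objects are created. A minimal sketch of the header every manifest document needs to pass this check (the group, kind, and names below are hypothetical illustrations, not the actual contents of ig-crd.yaml):

    apiVersion: apiextensions.k8s.io/v1   # required; absence produces "apiVersion not set"
    kind: CustomResourceDefinition        # required; absence produces "kind not set"
    metadata:
      name: widgets.demo.example.com      # hypothetical CRD: <plural>.<group>
    spec:
      group: demo.example.com
      scope: Namespaced
      names:
        kind: Widget
        plural: widgets
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object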
	W0904 20:56:24.255664  389648 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
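This warning is independent of the ig-crd retries: the storage-provisioner-rancher callback lost an optimistic-concurrency race while marking the local-path StorageClass as the default, so the update was rejected with a conflict ("the object has been modified"). A hedged sketch of the equivalent manual step; the annotation is the standard default-class marker from the Kubernetes documentation, and on a conflict the patch is simply re-run against the latest object:

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'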
	I0904 20:56:24.428253  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:25.145429  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.799802302s)
	I0904 20:56:25.145480  389648 addons.go:479] Verifying addon ingress=true in "addons-049370"
	I0904 20:56:25.145982  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.299284675s)
	I0904 20:56:25.146015  389648 addons.go:479] Verifying addon registry=true in "addons-049370"
	I0904 20:56:25.146076  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.289922439s)
	I0904 20:56:25.146132  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.876927435s)
	I0904 20:56:25.146167  389648 addons.go:479] Verifying addon metrics-server=true in "addons-049370"
	I0904 20:56:25.146241  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.000773212s)
	I0904 20:56:25.147285  389648 out.go:179] * Verifying registry addon...
	I0904 20:56:25.147335  389648 out.go:179] * Verifying ingress addon...
	I0904 20:56:25.148139  389648 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-049370 service yakd-dashboard -n yakd-dashboard
	
	I0904 20:56:25.149773  389648 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0904 20:56:25.149773  389648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0904 20:56:25.162307  389648 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:56:25.162382  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:25.162833  389648 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 20:56:25.162892  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
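The kapi.go waiter polls these label selectors until the matching pods leave Pending. A hedged kubectl equivalent of what the loop below keeps checking (selector names are taken from the log lines above; the timeout value is illustrative):

    kubectl --context addons-049370 -n ingress-nginx wait pod \
      --selector=app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=5m
    kubectl --context addons-049370 -n kube-system wait pod \
      --selector=kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=5m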
	W0904 20:56:25.256839  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:25.653521  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:25.653811  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:26.153386  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:26.153683  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:26.355221  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.987641855s)
	W0904 20:56:26.355277  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 20:56:26.355305  389648 retry.go:31] will retry after 260.638152ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
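Unlike the ig-crd case, this failure is an ordering problem: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same apply as the CRDs that define its type, and the API server has not finished registering snapshot.storage.k8s.io/v1, hence "resource mapping not found ... ensure CRDs are installed first". The later retry at 20:56:29 completes once the CRDs are established. A hedged sketch of how the ordering could be made explicit, with the CRD name taken from the stdout above:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml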
	I0904 20:56:26.355424  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.808790402s)
	I0904 20:56:26.355454  389648 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-049370"
	I0904 20:56:26.356999  389648 out.go:179] * Verifying csi-hostpath-driver addon...
	I0904 20:56:26.359335  389648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0904 20:56:26.364572  389648 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:56:26.364592  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:26.415311  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.987009875s)
	W0904 20:56:26.415355  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:26.415375  389648 retry.go:31] will retry after 295.761583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:26.616984  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:26.653507  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:26.653558  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:26.711551  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:26.849469  389648 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0904 20:56:26.849544  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:26.862656  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:26.874207  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:26.978097  389648 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0904 20:56:26.994974  389648 addons.go:238] Setting addon gcp-auth=true in "addons-049370"
	I0904 20:56:26.995024  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:26.995376  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:27.012374  389648 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0904 20:56:27.012428  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:27.028863  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:27.152149  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:27.152264  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:27.362370  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:27.653212  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:27.653402  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:27.758106  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
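These node_ready warnings mean the node's Ready condition is still False, typically because the kubelet and CNI plugin are still coming up, and minikube keeps polling until it flips to True. A hedged way to inspect the same condition by hand (the jsonpath expression is illustrative):

    kubectl --context addons-049370 get node addons-049370 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'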
	I0904 20:56:27.863000  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:28.153378  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:28.153490  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:28.363340  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:28.653066  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:28.653239  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:28.861982  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:29.092107  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.475068234s)
	I0904 20:56:29.092190  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.380598109s)
	I0904 20:56:29.092219  389648 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.079820201s)
	W0904 20:56:29.092237  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:29.092263  389648 retry.go:31] will retry after 502.484223ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:29.093894  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:29.095483  389648 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0904 20:56:29.096510  389648 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0904 20:56:29.096529  389648 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0904 20:56:29.112631  389648 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0904 20:56:29.112663  389648 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0904 20:56:29.128018  389648 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:56:29.128036  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0904 20:56:29.143020  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:56:29.153882  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:29.154123  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:29.362692  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:29.454170  389648 addons.go:479] Verifying addon gcp-auth=true in "addons-049370"
	I0904 20:56:29.455515  389648 out.go:179] * Verifying gcp-auth addon...
	I0904 20:56:29.457417  389648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0904 20:56:29.459571  389648 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 20:56:29.459590  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:29.595708  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:29.652683  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:29.652827  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:29.862029  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:29.960159  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:30.114851  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:30.114881  389648 retry.go:31] will retry after 693.179023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:30.152713  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:30.152863  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:30.257051  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:30.362609  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:30.460179  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:30.652858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:30.652980  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:30.808239  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:30.863242  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:30.961106  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:31.154171  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:31.154231  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:31.322382  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:31.322416  389648 retry.go:31] will retry after 1.197657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:31.362659  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:31.459971  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:31.652462  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:31.652562  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:31.862315  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:31.960600  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:32.153504  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:32.153604  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:32.362298  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:32.460616  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:32.520713  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:32.652511  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:32.652595  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:32.760458  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:32.863634  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:32.959731  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:33.040841  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:33.040881  389648 retry.go:31] will retry after 2.457515415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:33.152726  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:33.152743  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:33.362502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:33.460284  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:33.652934  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:33.653038  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:33.862246  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:33.960818  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:34.153166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:34.153280  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:34.362100  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:34.460789  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:34.653810  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:34.653810  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:34.861972  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:34.960530  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:35.153325  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:35.153406  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:35.257683  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:35.362424  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:35.460858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:35.499007  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:35.653242  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:35.653299  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:35.861724  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:35.959645  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:36.016874  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:36.016905  389648 retry.go:31] will retry after 3.533514487s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:36.152675  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:36.152869  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:36.362591  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:36.459815  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:36.652244  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:36.652298  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:36.862251  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:36.960712  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:37.153481  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:37.153520  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:37.362437  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:37.460789  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:37.652357  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:37.652379  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:37.756527  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:37.862037  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:37.960447  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:38.153502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:38.153539  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:38.362816  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:38.460210  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:38.652903  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:38.653135  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:38.862007  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:38.960650  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:39.153578  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:39.153774  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:39.361972  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:39.460461  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:39.551574  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:39.653495  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:39.653650  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:39.757372  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:39.862832  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:39.960361  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:40.069853  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:40.069886  389648 retry.go:31] will retry after 3.560952844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:40.153097  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:40.153206  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:40.363022  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:40.460438  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:40.653028  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:40.653073  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:40.861984  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:40.960713  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:41.153196  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:41.153351  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:41.361826  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:41.460267  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:41.652784  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:41.652802  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:41.862344  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:41.960834  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:42.152737  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:42.152979  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:42.257147  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:42.362587  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:42.459962  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:42.652593  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:42.652591  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:42.862875  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:42.960672  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:43.153594  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:43.153640  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:43.362502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:43.459930  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:43.631059  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:43.652889  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:43.653087  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:43.863337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:43.960266  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:44.144205  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:44.144237  389648 retry.go:31] will retry after 6.676490417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:44.152882  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:44.152942  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:44.257493  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:44.362019  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:44.460489  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:44.652917  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:44.653070  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:44.863130  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:44.960584  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:45.153391  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:45.153527  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:45.362608  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:45.460071  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:45.652849  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:45.652915  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:45.862777  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:45.960533  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:46.153667  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:46.153804  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:46.362632  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:46.459907  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:46.652296  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:46.652477  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:46.756788  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:46.862351  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:46.960886  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:47.152391  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:47.152568  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:47.362190  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:47.460736  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:47.653232  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:47.653276  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:47.862018  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:47.960474  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:48.153153  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:48.153187  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:48.361882  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:48.460168  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:48.652729  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:48.652864  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:48.757107  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:48.862689  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:48.960180  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:49.152873  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:49.153024  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:49.362233  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:49.460721  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:49.653148  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:49.653303  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:49.861892  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:49.960294  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:50.153077  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:50.153232  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:50.362407  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:50.460915  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:50.652501  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:50.652591  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:50.821192  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:50.862867  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:50.960502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:51.153049  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:51.153160  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:51.256873  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	W0904 20:56:51.328889  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:51.328930  389648 retry.go:31] will retry after 8.058478981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:51.362542  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:51.459958  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:51.652490  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:51.652667  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:51.862401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:51.960987  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:52.152519  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:52.152675  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:52.362366  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:52.460825  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:52.652376  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:52.652430  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:52.862135  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:52.960933  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:53.152709  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:53.152720  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:53.257375  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:53.361785  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:53.460337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:53.652733  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:53.653014  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:53.862742  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:53.960136  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:54.152726  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:54.152730  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:54.362518  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:54.461080  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:54.652473  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:54.652664  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:54.862347  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:54.961384  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:55.153124  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:55.153270  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:55.257640  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:55.362463  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:55.460990  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:55.652354  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:55.652574  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:55.862388  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:55.960122  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:56.152694  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:56.152920  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:56.361858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:56.460337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:56.653103  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:56.653185  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:56.862426  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:56.960988  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:57.152323  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:57.152431  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:57.362264  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:57.460771  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:57.653160  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:57.653308  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:57.756540  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:57.861955  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:57.960493  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:58.153029  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:58.153223  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:58.362583  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:58.460924  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:58.652481  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:58.652538  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:58.862381  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:58.960880  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:59.152567  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:59.152726  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:59.362851  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:59.387964  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:59.460401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:59.652881  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:59.653048  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:59.757426  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:59.862341  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0904 20:56:59.907626  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:59.907661  389648 retry.go:31] will retry after 19.126227015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:59.960065  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:00.152732  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:00.152876  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:00.363049  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:00.460514  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:00.653154  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:00.653270  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:00.862296  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:00.961337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:01.152894  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:01.153019  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:01.362117  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:01.460734  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:01.653271  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:01.653460  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:01.862509  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:01.960047  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:02.152837  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:02.152896  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:57:02.257044  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:57:02.362872  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:02.460517  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:02.653172  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:02.653366  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:02.862373  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:02.961084  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:03.152784  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:03.152910  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:03.362694  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:03.459964  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:03.652371  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:03.652557  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:03.757648  389648 node_ready.go:49] node "addons-049370" is "Ready"
	I0904 20:57:03.757687  389648 node_ready.go:38] duration metric: took 43.003447045s for node "addons-049370" to be "Ready" ...
	I0904 20:57:03.757707  389648 api_server.go:52] waiting for apiserver process to appear ...
	I0904 20:57:03.757770  389648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 20:57:03.775055  389648 api_server.go:72] duration metric: took 44.653167184s to wait for apiserver process to appear ...
	I0904 20:57:03.775146  389648 api_server.go:88] waiting for apiserver healthz status ...
	I0904 20:57:03.775175  389648 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 20:57:03.847773  389648 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0904 20:57:03.848894  389648 api_server.go:141] control plane version: v1.34.0
	I0904 20:57:03.848928  389648 api_server.go:131] duration metric: took 73.768685ms to wait for apiserver health ...
	I0904 20:57:03.848941  389648 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 20:57:03.853285  389648 system_pods.go:59] 20 kube-system pods found
	I0904 20:57:03.853319  389648 system_pods.go:61] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending
	I0904 20:57:03.853326  389648 system_pods.go:61] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending
	I0904 20:57:03.853331  389648 system_pods.go:61] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending
	I0904 20:57:03.853336  389648 system_pods.go:61] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending
	I0904 20:57:03.853341  389648 system_pods.go:61] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending
	I0904 20:57:03.853346  389648 system_pods.go:61] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:03.853352  389648 system_pods.go:61] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:03.853358  389648 system_pods.go:61] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:03.853366  389648 system_pods.go:61] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:03.853372  389648 system_pods.go:61] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending
	I0904 20:57:03.853380  389648 system_pods.go:61] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:03.853389  389648 system_pods.go:61] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:03.853403  389648 system_pods.go:61] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:03.853412  389648 system_pods.go:61] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending
	I0904 20:57:03.853423  389648 system_pods.go:61] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending
	I0904 20:57:03.853431  389648 system_pods.go:61] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:03.853439  389648 system_pods.go:61] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending
	I0904 20:57:03.853445  389648 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending
	I0904 20:57:03.853455  389648 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending
	I0904 20:57:03.853460  389648 system_pods.go:61] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending
	I0904 20:57:03.853471  389648 system_pods.go:74] duration metric: took 4.521878ms to wait for pod list to return data ...
	I0904 20:57:03.853485  389648 default_sa.go:34] waiting for default service account to be created ...
	I0904 20:57:03.855589  389648 default_sa.go:45] found service account: "default"
	I0904 20:57:03.855645  389648 default_sa.go:55] duration metric: took 2.148457ms for default service account to be created ...
	I0904 20:57:03.855669  389648 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 20:57:03.864140  389648 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:57:03.864166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:03.865511  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:03.865543  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending
	I0904 20:57:03.865552  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending
	I0904 20:57:03.865558  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending
	I0904 20:57:03.865563  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending
	I0904 20:57:03.865568  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending
	I0904 20:57:03.865574  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:03.865580  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:03.865586  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:03.865591  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:03.865595  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending
	I0904 20:57:03.865599  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:03.865602  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:03.865611  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:03.865621  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending
	I0904 20:57:03.865627  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending
	I0904 20:57:03.865631  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:03.865635  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending
	I0904 20:57:03.865639  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending
	I0904 20:57:03.865645  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending
	I0904 20:57:03.865650  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:03.865666  389648 retry.go:31] will retry after 266.681541ms: missing components: kube-dns
	I0904 20:57:03.963849  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:04.148992  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:04.149036  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:04.149049  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:57:04.149060  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:04.149065  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending
	I0904 20:57:04.149077  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:04.149083  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:04.149090  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:04.149095  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:04.149101  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:04.149158  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:04.149164  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:04.149171  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:04.149179  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:04.149188  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending
	I0904 20:57:04.149196  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:04.149207  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:04.149216  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending
	I0904 20:57:04.149226  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.149236  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.149249  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:04.149269  389648 retry.go:31] will retry after 384.617911ms: missing components: kube-dns
	I0904 20:57:04.154716  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:04.154839  389648 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:57:04.154853  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:04.366569  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:04.466268  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:04.567997  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:04.568030  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:04.568038  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:57:04.568045  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:04.568050  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 20:57:04.568057  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:04.568063  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:04.568067  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:04.568071  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:04.568074  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:04.568081  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:04.568086  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:04.568091  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:04.568096  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:04.568110  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 20:57:04.568115  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:04.568122  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:04.568127  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 20:57:04.568135  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.568140  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.568147  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:04.568163  389648 retry.go:31] will retry after 481.666443ms: missing components: kube-dns
	I0904 20:57:04.667086  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:04.667538  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:04.862644  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:04.959928  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:05.053770  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:05.053813  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:05.053821  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:57:05.053829  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:05.053834  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 20:57:05.053840  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:05.053846  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:05.053850  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:05.053854  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:05.053858  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:05.053863  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:05.053871  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:05.053875  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:05.053880  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:05.053887  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 20:57:05.053893  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:05.053900  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:05.053905  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 20:57:05.053912  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.053918  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.053924  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:05.053939  389648 retry.go:31] will retry after 484.806352ms: missing components: kube-dns
	I0904 20:57:05.153022  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:05.153142  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:05.363067  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:05.460377  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:05.543458  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:05.543495  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:05.543501  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Running
	I0904 20:57:05.543508  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:05.543514  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 20:57:05.543520  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:05.543525  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:05.543530  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:05.543542  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:05.543552  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:05.543557  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:05.543563  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:05.543567  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:05.543571  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:05.543579  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 20:57:05.543585  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:05.543593  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:05.543598  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 20:57:05.543605  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.543612  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.543618  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Running
	I0904 20:57:05.543626  389648 system_pods.go:126] duration metric: took 1.687941335s to wait for k8s-apps to be running ...
	I0904 20:57:05.543650  389648 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 20:57:05.543694  389648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 20:57:05.555385  389648 system_svc.go:56] duration metric: took 11.725653ms WaitForService to wait for kubelet
	I0904 20:57:05.555412  389648 kubeadm.go:578] duration metric: took 46.433531844s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:57:05.555439  389648 node_conditions.go:102] verifying NodePressure condition ...
	I0904 20:57:05.558136  389648 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 20:57:05.558169  389648 node_conditions.go:123] node cpu capacity is 8
	I0904 20:57:05.558187  389648 node_conditions.go:105] duration metric: took 2.741859ms to run NodePressure ...
	I0904 20:57:05.558203  389648 start.go:241] waiting for startup goroutines ...
	I0904 20:57:05.653335  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:05.653493  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:05.862594  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:05.960405  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:06.155853  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:06.155860  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:06.363166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:06.460689  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:06.653352  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:06.653395  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:06.862486  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:06.960974  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:07.152583  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:07.152693  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:07.362526  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:07.461234  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:07.653353  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:07.653430  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:07.862588  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:07.961373  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:08.153869  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:08.153919  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:08.363098  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:08.460845  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:08.652618  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:08.652818  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:08.863708  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:08.961239  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:09.153619  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:09.153661  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:09.363027  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:09.461172  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:09.653178  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:09.653259  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:09.862455  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:09.961183  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:10.153505  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:10.153868  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:10.362513  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:10.460913  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:10.653892  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:10.654021  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:10.863179  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:10.961003  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:11.152924  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:11.152937  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:11.363254  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:11.460435  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:11.653707  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:11.653749  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:11.862653  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:11.960670  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:12.153474  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:12.153582  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:12.362607  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:12.460401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:12.653547  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:12.653621  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:12.863488  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:12.961428  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:13.153780  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:13.153926  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:13.363601  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:13.463509  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:13.653590  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:13.653721  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:13.863091  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:13.960747  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:14.156722  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:14.156892  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:14.363724  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:14.460915  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:14.652850  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:14.652930  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:14.863379  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:14.960898  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:15.153105  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:15.153190  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:15.364645  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:15.466746  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:15.653529  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:15.653552  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:15.863473  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:15.961399  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:16.153418  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:16.153633  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:16.365659  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:16.460427  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:16.655316  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:16.656314  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:16.863846  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:16.960170  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:17.153040  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:17.153440  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:17.362488  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:17.461324  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:17.653058  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:17.653099  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:17.862919  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:17.960632  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:18.153699  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:18.153804  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:18.362710  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:18.460244  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:18.653100  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:18.653412  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:18.862825  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:18.963826  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:19.034934  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:19.152876  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:19.153003  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:19.363216  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:19.461101  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:19.654705  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:19.654966  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:19.862758  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:19.960238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:57:19.965214  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:19.965249  389648 retry.go:31] will retry after 20.693378838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
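The apply failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest is missing the two top-level fields every Kubernetes object must carry, apiVersion and kind. The Go sketch below is a hypothetical reproduction of that check, not minikube code: it decodes a manifest into just those two fields and reports the same "apiVersion not set, kind not set" condition seen in the log.

package main

import (
	"fmt"

	"gopkg.in/yaml.v3" // assumed YAML decoder; any decoder with struct tags works
)

// typeMeta mirrors the two top-level fields kubectl's validation complains about.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

// validateTypeMeta is a hypothetical helper, not minikube's code: it reports the
// same missing-field problems that the log above shows for ig-crd.yaml.
func validateTypeMeta(manifest []byte) []string {
	var tm typeMeta
	var problems []string
	if err := yaml.Unmarshal(manifest, &tm); err != nil {
		return []string{fmt.Sprintf("cannot parse manifest: %v", err)}
	}
	if tm.APIVersion == "" {
		problems = append(problems, "apiVersion not set")
	}
	if tm.Kind == "" {
		problems = append(problems, "kind not set")
	}
	return problems
}

func main() {
	// A manifest missing both required fields, as the failing ig-crd.yaml appears to be.
	broken := []byte("metadata:\n  name: traces.gadget.kinvolk.io\n")
	fmt.Println(validateTypeMeta(broken)) // [apiVersion not set kind not set]
}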
	I0904 20:57:20.153317  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:20.153424  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:20.362498  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:20.461668  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:20.653715  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:20.653849  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:20.862660  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:20.960422  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:21.153279  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:21.153367  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:21.362521  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:21.461453  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:21.653611  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:21.653616  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:21.862958  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:21.960988  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:22.152881  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:22.152896  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:22.362933  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:22.460865  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:22.652773  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:22.652825  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:22.862669  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:22.960462  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:23.153822  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:23.154026  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:23.362981  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:23.460282  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:23.653482  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:23.653565  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:23.862339  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:23.960741  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:24.153397  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:24.153562  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:24.362213  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:24.460604  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:24.653463  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:24.653585  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:24.862661  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:24.960921  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:25.152671  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:25.152676  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:25.362282  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:25.460981  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:25.652991  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:25.653126  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:25.863187  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:25.960971  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:26.155115  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:26.155549  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:26.364641  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:26.460565  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:26.653351  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:26.653460  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:26.862335  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:26.961215  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:27.153245  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:27.153382  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:27.362420  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:27.460886  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:27.652946  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:27.653004  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:27.862794  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:27.960433  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:28.153554  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:28.153563  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:28.362061  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:28.460951  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:28.653077  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:28.653166  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:28.862812  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:28.960910  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:29.152712  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:29.152713  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:29.362969  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:29.460457  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:29.653716  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:29.653816  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:29.862674  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:29.960527  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:30.153309  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:30.153467  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:30.364159  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:30.465320  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:30.653741  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:30.653775  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:30.862640  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:30.963437  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:31.153238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:31.153259  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:31.362036  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:31.460565  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:31.653248  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:31.653298  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:31.863300  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:31.960651  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:32.153238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:32.153326  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:32.362194  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:32.460483  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:32.653633  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:32.653670  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:32.862856  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:32.960920  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:33.163353  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:33.163571  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:33.363807  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:33.463275  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:33.661398  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:33.661866  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:34.067754  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:34.158198  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:34.252681  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:34.267829  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:34.462230  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:34.462566  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:34.655260  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:34.655318  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:34.862629  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:34.960553  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:35.153838  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:35.153871  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:35.363148  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:35.461050  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:35.653525  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:35.653658  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:35.864175  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:35.961508  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:36.154202  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:36.154257  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:36.363162  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:36.460840  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:36.653022  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:36.653219  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:36.863704  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:36.960613  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:37.153938  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:37.153958  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:37.363084  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:37.461050  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:37.652708  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:37.652726  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:37.862959  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:37.960607  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:38.153906  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:38.154265  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:38.363779  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:38.460618  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:38.653662  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:38.653739  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:38.862850  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:38.960535  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:39.153828  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:39.153870  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:39.363192  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:39.461549  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:39.653371  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:39.653594  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:39.862436  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:39.961060  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:40.153255  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:40.153265  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:40.362463  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:40.461112  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:40.653195  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:40.653238  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:40.659168  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:40.863485  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:40.961390  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:41.153507  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:41.153683  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:41.363294  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:41.460511  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:57:41.586847  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:41.586876  389648 retry.go:31] will retry after 18.584233469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:41.653116  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:41.653297  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:41.864041  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:41.960341  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:42.153090  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:42.153093  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:42.363050  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:42.460434  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:42.653587  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:42.653634  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:42.862872  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:42.960883  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:43.153266  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:43.153570  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:43.362999  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:43.460713  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:43.653498  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:43.653565  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:43.862351  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:43.960779  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:44.152645  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:44.152744  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:44.362647  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:44.460216  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:44.653789  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:44.654025  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:44.863259  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:44.961105  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:45.153229  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:45.153267  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:45.363497  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:45.461501  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:45.653400  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:45.653589  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:45.862262  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:45.960864  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:46.152860  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:46.152890  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:46.363058  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:46.460848  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:46.653051  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:46.653077  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:46.863163  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:46.960859  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:47.153234  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:47.153238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:47.363116  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:47.460543  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:47.653774  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:47.653836  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:47.863023  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:47.961011  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:48.153044  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:48.153183  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:48.363514  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:48.461320  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:48.653777  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:48.653858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:48.862550  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:48.961142  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:49.153028  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:49.153220  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:49.362652  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:49.459891  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:49.653164  389648 kapi.go:107] duration metric: took 1m24.503386944s to wait for kubernetes.io/minikube-addons=registry ...
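The kapi.go:96 lines that fill this log are a fixed-interval poll: list the pods matching a label selector, report the phase while it is still Pending, and record the total wait as a duration metric once the pod is ready (1m24.5s for the registry label above). The Go sketch below is a minimal, hypothetical version of that loop; the podPhase lookup is a placeholder, not minikube's actual kapi implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForLabel is a hypothetical sketch of the wait loop seen in the log: poll a
// label selector on a fixed interval until the matching pod reports Running,
// then log the total wait as a duration metric.
func waitForLabel(selector string, interval, timeout time.Duration, podPhase func(string) (string, error)) error {
	start := time.Now()
	deadline := start.Add(timeout)
	for time.Now().Before(deadline) {
		phase, err := podPhase(selector)
		if err == nil && phase == "Running" {
			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
			return nil
		}
		fmt.Printf("waiting for pod %q, current state: %s: [%v]\n", selector, phase, err)
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for " + selector)
}

func main() {
	calls := 0
	// Placeholder phase lookup: Pending for the first few polls, then Running.
	podPhase := func(string) (string, error) {
		calls++
		if calls < 4 {
			return "Pending", nil
		}
		return "Running", nil
	}
	_ = waitForLabel("kubernetes.io/minikube-addons=registry", 10*time.Millisecond, time.Second, podPhase)
}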
	I0904 20:57:49.653212  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:49.862954  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:49.960303  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:50.153422  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:50.362439  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:50.460798  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:50.653419  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:50.862686  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:50.960970  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:51.154179  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:51.363166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:51.460875  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:51.652647  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:51.863070  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:51.960526  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:52.153813  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:52.362711  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:52.460087  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:52.653154  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:52.863206  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:52.960823  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:53.153125  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:53.363443  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:53.461004  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:53.656643  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:53.866801  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:53.961469  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:54.153974  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:54.364415  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:54.461643  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:54.655730  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:54.867016  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:54.961177  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:55.155271  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:55.363462  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:55.461909  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:55.654080  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:55.862506  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:55.962401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:56.153639  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:56.363134  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:56.460790  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:56.653986  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:56.862951  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:56.959890  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:57.152935  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:57.363141  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:57.460860  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:57.653029  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:57.863171  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:57.961135  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:58.153239  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:58.363391  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:58.460583  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:58.654112  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:58.863905  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:58.960604  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:59.153765  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:59.363398  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:59.460827  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:59.653240  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:59.863414  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:59.960740  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:00.154243  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:00.172145  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:58:00.363535  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:00.460166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:00.653062  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:00.863155  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:00.960597  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:01.153145  389648 kapi.go:107] duration metric: took 1m36.0033494s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0904 20:58:01.362175  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.189987346s)
	W0904 20:58:01.362237  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 20:58:01.362358  389648 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
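For reference, the validation failure above is the generic kubectl check that every manifest document must carry top-level apiVersion and kind fields, which the error reports as missing from ig-crd.yaml. The fragment below is only an illustrative sketch of a CRD header that would satisfy that check; it is not the contents of the real /etc/kubernetes/addons/ig-crd.yaml, and all names in it are hypothetical.

# Illustrative sketch only; not the actual ig-crd.yaml shipped by the addon.
# kubectl's validator requires these two top-level fields in every document it applies.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.gadget.example.io      # hypothetical CRD name
spec:
  group: gadget.example.io              # hypothetical API group
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true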
	I0904 20:58:01.377323  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:01.461301  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:01.862664  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:01.960172  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:02.362264  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:02.460782  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:02.863228  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:02.960690  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:03.362947  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:03.461061  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:03.863740  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:03.960182  389648 kapi.go:107] duration metric: took 1m34.502765752s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0904 20:58:03.962033  389648 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-049370 cluster.
	I0904 20:58:03.963517  389648 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0904 20:58:03.964745  389648 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
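The opt-out described in the gcp-auth messages above is a pod label. A hedged sketch follows: only the gcp-auth-skip-secret label key comes from the log text, while the pod name and the label value of "true" are assumptions.

# Hypothetical pod; only the gcp-auth-skip-secret label key is taken from the message above.
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                    # hypothetical name
  labels:
    gcp-auth-skip-secret: "true"        # value assumed; the log names only the key
spec:
  containers:
    - name: app
      image: docker.io/nginx:alpine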
	I0904 20:58:04.362552  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:04.863544  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:05.363523  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:05.862668  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:06.363450  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:06.862835  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:07.363579  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:07.862482  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:08.362742  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:08.863840  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:09.365433  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:09.862609  389648 kapi.go:107] duration metric: took 1m43.503273609s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0904 20:58:09.864811  389648 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, ingress-dns, registry-creds, nvidia-device-plugin, default-storageclass, cloud-spanner, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0904 20:58:09.865999  389648 addons.go:514] duration metric: took 1m50.744105832s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner ingress-dns registry-creds nvidia-device-plugin default-storageclass cloud-spanner metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0904 20:58:09.866049  389648 start.go:246] waiting for cluster config update ...
	I0904 20:58:09.866079  389648 start.go:255] writing updated cluster config ...
	I0904 20:58:09.866376  389648 ssh_runner.go:195] Run: rm -f paused
	I0904 20:58:09.869857  389648 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 20:58:09.872605  389648 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m8z9t" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.876507  389648 pod_ready.go:94] pod "coredns-66bc5c9577-m8z9t" is "Ready"
	I0904 20:58:09.876529  389648 pod_ready.go:86] duration metric: took 3.904383ms for pod "coredns-66bc5c9577-m8z9t" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.878366  389648 pod_ready.go:83] waiting for pod "etcd-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.881658  389648 pod_ready.go:94] pod "etcd-addons-049370" is "Ready"
	I0904 20:58:09.881678  389648 pod_ready.go:86] duration metric: took 3.291911ms for pod "etcd-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.883326  389648 pod_ready.go:83] waiting for pod "kube-apiserver-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.886438  389648 pod_ready.go:94] pod "kube-apiserver-addons-049370" is "Ready"
	I0904 20:58:09.886456  389648 pod_ready.go:86] duration metric: took 3.11401ms for pod "kube-apiserver-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.888020  389648 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:10.273761  389648 pod_ready.go:94] pod "kube-controller-manager-addons-049370" is "Ready"
	I0904 20:58:10.273790  389648 pod_ready.go:86] duration metric: took 385.749346ms for pod "kube-controller-manager-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:10.473572  389648 pod_ready.go:83] waiting for pod "kube-proxy-k5lnm" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:10.873887  389648 pod_ready.go:94] pod "kube-proxy-k5lnm" is "Ready"
	I0904 20:58:10.873914  389648 pod_ready.go:86] duration metric: took 400.319117ms for pod "kube-proxy-k5lnm" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:11.074268  389648 pod_ready.go:83] waiting for pod "kube-scheduler-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:11.473936  389648 pod_ready.go:94] pod "kube-scheduler-addons-049370" is "Ready"
	I0904 20:58:11.473971  389648 pod_ready.go:86] duration metric: took 399.67197ms for pod "kube-scheduler-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:11.473987  389648 pod_ready.go:40] duration metric: took 1.604097075s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 20:58:11.514779  389648 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 20:58:11.516435  389648 out.go:179] * Done! kubectl is now configured to use "addons-049370" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 21:03:12 addons-049370 crio[1043]: time="2025-09-04 21:03:12.978890155Z" level=info msg="Image docker.io/nginx:alpine not found" id=fc08c487-a0bb-4f35-94f2-495aea07524e name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:23 addons-049370 crio[1043]: time="2025-09-04 21:03:23.978489059Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=34ed16c2-7a7a-4780-8592-c5fac9f5f298 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:23 addons-049370 crio[1043]: time="2025-09-04 21:03:23.978789742Z" level=info msg="Image docker.io/nginx:alpine not found" id=34ed16c2-7a7a-4780-8592-c5fac9f5f298 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:29 addons-049370 crio[1043]: time="2025-09-04 21:03:29.792807884Z" level=info msg="Pulling image: docker.io/nginx:latest" id=3162ef9d-efd8-4d0d-af97-6fbe20f89e24 name=/runtime.v1.ImageService/PullImage
	Sep 04 21:03:29 addons-049370 crio[1043]: time="2025-09-04 21:03:29.796436092Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 04 21:03:30 addons-049370 crio[1043]: time="2025-09-04 21:03:30.517487489Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=03d20a44-a068-443e-b1b5-dbaf55a411e9 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:30 addons-049370 crio[1043]: time="2025-09-04 21:03:30.517844235Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=03d20a44-a068-443e-b1b5-dbaf55a411e9 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:36 addons-049370 crio[1043]: time="2025-09-04 21:03:36.978073879Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b5ec0976-421d-4554-940b-e27aa72bf5a6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:36 addons-049370 crio[1043]: time="2025-09-04 21:03:36.978369963Z" level=info msg="Image docker.io/nginx:alpine not found" id=b5ec0976-421d-4554-940b-e27aa72bf5a6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:44 addons-049370 crio[1043]: time="2025-09-04 21:03:44.977944693Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=302d6273-ed57-4f2c-855e-c09858ecadd8 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:44 addons-049370 crio[1043]: time="2025-09-04 21:03:44.978218340Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=302d6273-ed57-4f2c-855e-c09858ecadd8 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:47 addons-049370 crio[1043]: time="2025-09-04 21:03:47.978127071Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=51192e80-0d1a-4b82-8cee-756759ecf36b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:47 addons-049370 crio[1043]: time="2025-09-04 21:03:47.978435226Z" level=info msg="Image docker.io/nginx:alpine not found" id=51192e80-0d1a-4b82-8cee-756759ecf36b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:04:00 addons-049370 crio[1043]: time="2025-09-04 21:04:00.451710274Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=2624ab5b-cc49-49e1-9f08-ded890f628ca name=/runtime.v1.ImageService/PullImage
	Sep 04 21:04:00 addons-049370 crio[1043]: time="2025-09-04 21:04:00.468036869Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Sep 04 21:05:01 addons-049370 crio[1043]: time="2025-09-04 21:05:01.621717899Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=c3235f93-4534-437d-a02d-b95205408eb8 name=/runtime.v1.ImageService/PullImage
	Sep 04 21:05:01 addons-049370 crio[1043]: time="2025-09-04 21:05:01.636092211Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 04 21:05:01 addons-049370 crio[1043]: time="2025-09-04 21:05:01.740079042Z" level=info msg="Stopping pod sandbox: 1df2dd8ad5dd26c328602fafd24cb6a504999460008dc2ec6fd5a0505df856c4" id=cd0fb8d2-a128-48f7-ae4d-46cd8fe6dfbd name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:05:01 addons-049370 crio[1043]: time="2025-09-04 21:05:01.740400914Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b Namespace:local-path-storage ID:1df2dd8ad5dd26c328602fafd24cb6a504999460008dc2ec6fd5a0505df856c4 UID:96904e25-b0d6-4506-8c7c-03307f38bc2b NetNS:/var/run/netns/ac753f82-5759-4a61-958a-36a4cff11c06 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 04 21:05:01 addons-049370 crio[1043]: time="2025-09-04 21:05:01.740597173Z" level=info msg="Deleting pod local-path-storage_helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b from CNI network \"kindnet\" (type=ptp)"
	Sep 04 21:05:01 addons-049370 crio[1043]: time="2025-09-04 21:05:01.771531588Z" level=info msg="Stopped pod sandbox: 1df2dd8ad5dd26c328602fafd24cb6a504999460008dc2ec6fd5a0505df856c4" id=cd0fb8d2-a128-48f7-ae4d-46cd8fe6dfbd name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:05:14 addons-049370 crio[1043]: time="2025-09-04 21:05:14.399343624Z" level=info msg="Stopping pod sandbox: 1df2dd8ad5dd26c328602fafd24cb6a504999460008dc2ec6fd5a0505df856c4" id=a316ca55-e0fd-4aac-a262-506edde03899 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:05:14 addons-049370 crio[1043]: time="2025-09-04 21:05:14.399395887Z" level=info msg="Stopped pod sandbox (already stopped): 1df2dd8ad5dd26c328602fafd24cb6a504999460008dc2ec6fd5a0505df856c4" id=a316ca55-e0fd-4aac-a262-506edde03899 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:05:14 addons-049370 crio[1043]: time="2025-09-04 21:05:14.399634993Z" level=info msg="Removing pod sandbox: 1df2dd8ad5dd26c328602fafd24cb6a504999460008dc2ec6fd5a0505df856c4" id=3431b67b-c8b7-44e5-89cb-0ae9c27c00be name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:05:14 addons-049370 crio[1043]: time="2025-09-04 21:05:14.405542927Z" level=info msg="Removed pod sandbox: 1df2dd8ad5dd26c328602fafd24cb6a504999460008dc2ec6fd5a0505df856c4" id=3431b67b-c8b7-44e5-89cb-0ae9c27c00be name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0812830cff5e8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          7 minutes ago       Running             busybox                                  0                   9db653f3755b4       busybox
	821cbe3252d57       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   f325578cefe27       csi-hostpathplugin-98s7l
	cb89aa1bc3c60       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   f325578cefe27       csi-hostpathplugin-98s7l
	c838f8e9fc3db       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   f325578cefe27       csi-hostpathplugin-98s7l
	d0e1e178f59da       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   f325578cefe27       csi-hostpathplugin-98s7l
	2739b36b07e33       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   f325578cefe27       csi-hostpathplugin-98s7l
	71f3c44efa7ed       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             7 minutes ago       Running             controller                               0                   616b907580ffe       ingress-nginx-controller-9cc49f96f-9hj2l
	7edf2c6fe20a3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506                            7 minutes ago       Running             gadget                                   0                   c4ec61756e1cd       gadget-whkft
	c3cf3d964594d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   f325578cefe27       csi-hostpathplugin-98s7l
	5c7cafdaee154       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   c6e94adfea087       snapshot-controller-7d9fbc56b8-mgxvk
	cf878cd883800       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   a16557be7ddd8       snapshot-controller-7d9fbc56b8-5d9jh
	3ba8ba2525962       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   7 minutes ago       Exited              patch                                    0                   6e76b5fa98c54       ingress-nginx-admission-patch-gtdvl
	712fefc65d0c1       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   06eca301ea94b       csi-hostpath-attacher-0
	4d5989f69feeb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   7 minutes ago       Exited              create                                   0                   8ab625b3a8d0f       ingress-nginx-admission-create-bcplk
	56891fc3e82a7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   cd89e12ceb21a       csi-hostpath-resizer-0
	7d2cafb9fbef5       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               8 minutes ago       Running             minikube-ingress-dns                     0                   ab8997b22bdfa       kube-ingress-dns-minikube
	ae86f0dc5f527       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             8 minutes ago       Running             local-path-provisioner                   0                   88ad798d96077       local-path-provisioner-648f6765c9-dlgrh
	5a078a0cc821d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   789a7bd2ea563       storage-provisioner
	f34769614a539       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   4201e6440890f       coredns-66bc5c9577-m8z9t
	c934f0f4b966c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             8 minutes ago       Running             kindnet-cni                              0                   15477ade7fdb4       kindnet-7bfb9
	f6a9e9c72d6ba       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             8 minutes ago       Running             kube-proxy                               0                   8022b4762a732       kube-proxy-k5lnm
	3f2b5739caaa5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             9 minutes ago       Running             etcd                                     0                   a0a640c2dfdf7       etcd-addons-049370
	c29c83b9956a1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             9 minutes ago       Running             kube-scheduler                           0                   dcb7c5c1869a2       kube-scheduler-addons-049370
	c5667de904598       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             9 minutes ago       Running             kube-controller-manager                  0                   8e65b647d075e       kube-controller-manager-addons-049370
	e754d67808d98       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             9 minutes ago       Running             kube-apiserver                           0                   ded69ea3b436b       kube-apiserver-addons-049370
	
	
	==> coredns [f34769614a539f8a9deabe583e02287082f6ea11bf18d071546e1a719cab9a53] <==
	[INFO] 10.244.0.19:55773 - 30973 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004937086s
	[INFO] 10.244.0.19:54298 - 21135 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004015449s
	[INFO] 10.244.0.19:54298 - 21393 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005610527s
	[INFO] 10.244.0.19:47617 - 25189 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.0041739s
	[INFO] 10.244.0.19:47617 - 25434 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004264819s
	[INFO] 10.244.0.19:52948 - 36411 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103838s
	[INFO] 10.244.0.19:52948 - 36178 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000155332s
	[INFO] 10.244.0.22:41475 - 45724 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188468s
	[INFO] 10.244.0.22:38337 - 8650 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000284766s
	[INFO] 10.244.0.22:56826 - 5154 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151577s
	[INFO] 10.244.0.22:34009 - 20877 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000175171s
	[INFO] 10.244.0.22:47086 - 56751 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091852s
	[INFO] 10.244.0.22:45919 - 6012 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095085s
	[INFO] 10.244.0.22:37872 - 33595 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003127291s
	[INFO] 10.244.0.22:58544 - 3234 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003586803s
	[INFO] 10.244.0.22:34813 - 19895 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004201792s
	[INFO] 10.244.0.22:60988 - 58217 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005450137s
	[INFO] 10.244.0.22:55463 - 22980 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005077908s
	[INFO] 10.244.0.22:35577 - 40764 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005725827s
	[INFO] 10.244.0.22:55201 - 19501 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00586828s
	[INFO] 10.244.0.22:47590 - 18187 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009774018s
	[INFO] 10.244.0.22:43687 - 40215 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000894336s
	[INFO] 10.244.0.22:40249 - 16957 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001679777s
	[INFO] 10.244.0.26:43025 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196657s
	[INFO] 10.244.0.26:48775 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015597s
	
	
	==> describe nodes <==
	Name:               addons-049370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-049370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a
	                    minikube.k8s.io/name=addons-049370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T20_56_14_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-049370
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-049370"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 20:56:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-049370
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 21:05:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 21:05:05 +0000   Thu, 04 Sep 2025 20:56:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 21:05:05 +0000   Thu, 04 Sep 2025 20:56:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 21:05:05 +0000   Thu, 04 Sep 2025 20:56:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 21:05:05 +0000   Thu, 04 Sep 2025 20:57:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-049370
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a303a7b5bdc4444fa740fba6d81d7a69
	  System UUID:                e0421e3f-022c-4346-89b0-92bd27eff9ea
	  Boot ID:                    d34ed5fc-a148-45de-9a0e-f744d5f792e8
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-whkft                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m52s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-9hj2l    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         8m52s
	  kube-system                 coredns-66bc5c9577-m8z9t                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m56s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m51s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m51s
	  kube-system                 csi-hostpathplugin-98s7l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 etcd-addons-049370                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m2s
	  kube-system                 kindnet-7bfb9                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m57s
	  kube-system                 kube-apiserver-addons-049370                250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 kube-controller-manager-addons-049370       200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m53s
	  kube-system                 kube-proxy-k5lnm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 kube-scheduler-addons-049370                100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-5d9jh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-mgxvk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m53s
	  local-path-storage          local-path-provisioner-648f6765c9-dlgrh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 8m51s  kube-proxy       
	  Normal   Starting                 9m3s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m3s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m2s   kubelet          Node addons-049370 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m2s   kubelet          Node addons-049370 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m2s   kubelet          Node addons-049370 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m58s  node-controller  Node addons-049370 event: Registered Node addons-049370 in Controller
	  Normal   NodeReady                8m13s  kubelet          Node addons-049370 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000069] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000004] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +1.008573] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000007] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000003] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000001] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +2.015727] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000007] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000000] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000003] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +4.127589] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000006] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000000] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000002] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +8.191103] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000006] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000017] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000002] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	
	
	==> etcd [3f2b5739caaa53e307caf9baa0ce3898f9c7585d8d2ae3924c36566f18f3e2c1] <==
	{"level":"info","ts":"2025-09-04T20:56:22.648997Z","caller":"traceutil/trace.go:172","msg":"trace[1125510617] transaction","detail":"{read_only:false; number_of_response:1; response_revision:392; }","duration":"385.315266ms","start":"2025-09-04T20:56:22.263655Z","end":"2025-09-04T20:56:22.648970Z","steps":["trace[1125510617] 'process raft request'  (duration: 202.681038ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:56:22.649181Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T20:56:22.263638Z","time spent":"385.424915ms","remote":"127.0.0.1:58690","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":55,"response count":0,"response size":4404,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-nqwmj\" mod_revision:350 > success:<request_delete_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-nqwmj\" > > failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-nqwmj\" > >"}
	{"level":"info","ts":"2025-09-04T20:56:22.649436Z","caller":"traceutil/trace.go:172","msg":"trace[495038974] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"284.907787ms","start":"2025-09-04T20:56:22.364514Z","end":"2025-09-04T20:56:22.649422Z","steps":["trace[495038974] 'process raft request'  (duration: 185.809853ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:56:26.851373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:26.859519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.396241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.402544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.546682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.553504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55094","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T20:57:34.056678Z","caller":"traceutil/trace.go:172","msg":"trace[1745603704] transaction","detail":"{read_only:false; response_revision:1077; number_of_response:1; }","duration":"195.917023ms","start":"2025-09-04T20:57:33.860734Z","end":"2025-09-04T20:57:34.056651Z","steps":["trace[1745603704] 'process raft request'  (duration: 195.733375ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:57:34.056844Z","caller":"traceutil/trace.go:172","msg":"trace[278916272] transaction","detail":"{read_only:false; response_revision:1079; number_of_response:1; }","duration":"106.019959ms","start":"2025-09-04T20:57:33.950812Z","end":"2025-09-04T20:57:34.056832Z","steps":["trace[278916272] 'process raft request'  (duration: 105.809945ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:57:34.057108Z","caller":"traceutil/trace.go:172","msg":"trace[325232300] transaction","detail":"{read_only:false; response_revision:1078; number_of_response:1; }","duration":"111.429632ms","start":"2025-09-04T20:57:33.945667Z","end":"2025-09-04T20:57:34.057097Z","steps":["trace[325232300] 'process raft request'  (duration: 110.913633ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:13.409295Z","caller":"traceutil/trace.go:172","msg":"trace[1613289946] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"116.026957ms","start":"2025-09-04T20:58:13.293249Z","end":"2025-09-04T20:58:13.409276Z","steps":["trace[1613289946] 'process raft request'  (duration: 115.923335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:58:30.444195Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.156243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T20:58:30.444268Z","caller":"traceutil/trace.go:172","msg":"trace[447435360] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1361; }","duration":"127.240307ms","start":"2025-09-04T20:58:30.317014Z","end":"2025-09-04T20:58:30.444254Z","steps":["trace[447435360] 'agreement among raft nodes before linearized reading'  (duration: 44.055437ms)","trace[447435360] 'range keys from in-memory index tree'  (duration: 83.073918ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.444209Z","caller":"traceutil/trace.go:172","msg":"trace[2038104867] transaction","detail":"{read_only:false; response_revision:1362; number_of_response:1; }","duration":"131.730807ms","start":"2025-09-04T20:58:30.312459Z","end":"2025-09-04T20:58:30.444190Z","steps":["trace[2038104867] 'process raft request'  (duration: 48.653692ms)","trace[2038104867] 'compare'  (duration: 82.954949ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.642255Z","caller":"traceutil/trace.go:172","msg":"trace[822073905] transaction","detail":"{read_only:false; response_revision:1367; number_of_response:1; }","duration":"111.76712ms","start":"2025-09-04T20:58:30.530471Z","end":"2025-09-04T20:58:30.642238Z","steps":["trace[822073905] 'process raft request'  (duration: 111.724252ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:30.642413Z","caller":"traceutil/trace.go:172","msg":"trace[444267485] transaction","detail":"{read_only:false; response_revision:1366; number_of_response:1; }","duration":"173.99749ms","start":"2025-09-04T20:58:30.468390Z","end":"2025-09-04T20:58:30.642388Z","steps":["trace[444267485] 'process raft request'  (duration: 79.867478ms)","trace[444267485] 'compare'  (duration: 93.793378ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.701102Z","caller":"traceutil/trace.go:172","msg":"trace[1596308249] transaction","detail":"{read_only:false; response_revision:1368; number_of_response:1; }","duration":"114.975474ms","start":"2025-09-04T20:58:30.586109Z","end":"2025-09-04T20:58:30.701084Z","steps":["trace[1596308249] 'process raft request'  (duration: 114.88463ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:58:30.831980Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.440231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T20:58:30.832128Z","caller":"traceutil/trace.go:172","msg":"trace[1697395868] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:1369; }","duration":"126.599135ms","start":"2025-09-04T20:58:30.705510Z","end":"2025-09-04T20:58:30.832110Z","steps":["trace[1697395868] 'agreement among raft nodes before linearized reading'  (duration: 66.572708ms)","trace[1697395868] 'range keys from in-memory index tree'  (duration: 59.837538ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.832171Z","caller":"traceutil/trace.go:172","msg":"trace[645371951] transaction","detail":"{read_only:false; response_revision:1370; number_of_response:1; }","duration":"127.267701ms","start":"2025-09-04T20:58:30.704881Z","end":"2025-09-04T20:58:30.832149Z","steps":["trace[645371951] 'process raft request'  (duration: 67.258003ms)","trace[645371951] 'compare'  (duration: 59.843945ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.832360Z","caller":"traceutil/trace.go:172","msg":"trace[1978939171] transaction","detail":"{read_only:false; response_revision:1371; number_of_response:1; }","duration":"127.399776ms","start":"2025-09-04T20:58:30.704948Z","end":"2025-09-04T20:58:30.832348Z","steps":["trace[1978939171] 'process raft request'  (duration: 127.166902ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:30.832409Z","caller":"traceutil/trace.go:172","msg":"trace[1686127060] transaction","detail":"{read_only:false; response_revision:1372; number_of_response:1; }","duration":"126.865141ms","start":"2025-09-04T20:58:30.705526Z","end":"2025-09-04T20:58:30.832392Z","steps":["trace[1686127060] 'process raft request'  (duration: 126.765828ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:30.836024Z","caller":"traceutil/trace.go:172","msg":"trace[1408390840] transaction","detail":"{read_only:false; response_revision:1373; number_of_response:1; }","duration":"126.512815ms","start":"2025-09-04T20:58:30.705957Z","end":"2025-09-04T20:58:30.832469Z","steps":["trace[1408390840] 'process raft request'  (duration: 126.396725ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:05:16 up  2:47,  0 users,  load average: 0.25, 0.55, 0.51
	Linux addons-049370 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [c934f0f4b966c80bea5021ff2cd61d60fc1f09abb35b790b7fa2c052eb648772] <==
	I0904 21:03:13.567736       1 main.go:301] handling current node
	I0904 21:03:23.572888       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:03:23.572931       1 main.go:301] handling current node
	I0904 21:03:33.568832       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:03:33.568861       1 main.go:301] handling current node
	I0904 21:03:43.567370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:03:43.567398       1 main.go:301] handling current node
	I0904 21:03:53.567483       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:03:53.567532       1 main.go:301] handling current node
	I0904 21:04:03.569552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:04:03.569594       1 main.go:301] handling current node
	I0904 21:04:13.568888       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:04:13.568924       1 main.go:301] handling current node
	I0904 21:04:23.572899       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:04:23.572931       1 main.go:301] handling current node
	I0904 21:04:33.572869       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:04:33.572909       1 main.go:301] handling current node
	I0904 21:04:43.567579       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:04:43.567610       1 main.go:301] handling current node
	I0904 21:04:53.572848       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:04:53.572882       1 main.go:301] handling current node
	I0904 21:05:03.568245       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:05:03.568285       1 main.go:301] handling current node
	I0904 21:05:13.568869       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:05:13.568906       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e754d67808d98a38d816120e6f2508d9bc342968fa147d926ff9d362a0796737] <==
	E0904 20:57:31.385076       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.155:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	E0904 20:57:31.385084       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 20:57:31.395899       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0904 20:57:38.465684       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0904 20:58:21.163245       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60624: use of closed network connection
	E0904 20:58:21.316649       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60648: use of closed network connection
	I0904 20:58:29.788002       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0904 20:58:29.995932       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.217.161"}
	I0904 20:58:30.465191       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.103.10"}
	I0904 20:58:31.764493       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 20:58:45.951516       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 20:59:32.395923       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0904 20:59:34.466474       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:00:11.575196       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:00:37.217035       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:01:12.003193       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:01:57.163986       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:02:17.079567       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:03:09.847622       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:03:20.654283       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:04:19.146103       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:04:32.303972       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [c5667de904598d16bc7b2fd5cfcd19280dc33b7d377dd608e1fc9961af9c518c] <==
	I0904 20:56:18.365258       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0904 20:56:18.365795       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0904 20:56:18.366547       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0904 20:56:18.367665       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 20:56:18.367674       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0904 20:56:18.378969       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0904 20:56:18.429609       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0904 20:56:18.463438       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 20:56:18.463461       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0904 20:56:18.463467       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0904 20:56:18.530779       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0904 20:56:23.945861       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0904 20:56:48.371672       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 20:56:48.371810       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0904 20:56:48.371857       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0904 20:56:48.472497       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 20:56:48.537614       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0904 20:56:48.541310       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0904 20:56:48.641464       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 20:57:08.353437       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0904 20:57:18.477549       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 20:57:18.648659       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0904 20:58:34.483247       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0904 20:59:09.361673       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0904 20:59:22.484206       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	
	
	==> kube-proxy [f6a9e9c72d6babda359c890098381bd848b231b9b281facb3f3cdc9763aee908] <==
	I0904 20:56:23.263174       1 server_linux.go:53] "Using iptables proxy"
	I0904 20:56:23.846890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 20:56:23.948000       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 20:56:23.948116       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 20:56:23.948247       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 20:56:24.347256       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 20:56:24.347395       1 server_linux.go:132] "Using iptables Proxier"
	I0904 20:56:24.361570       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 20:56:24.362683       1 server.go:527] "Version info" version="v1.34.0"
	I0904 20:56:24.362781       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 20:56:24.364537       1 config.go:200] "Starting service config controller"
	I0904 20:56:24.364555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 20:56:24.364576       1 config.go:106] "Starting endpoint slice config controller"
	I0904 20:56:24.364583       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 20:56:24.364619       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 20:56:24.364629       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 20:56:24.365511       1 config.go:309] "Starting node config controller"
	I0904 20:56:24.365557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 20:56:24.365570       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 20:56:24.465478       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 20:56:24.465535       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 20:56:24.465550       1 shared_informer.go:356] "Caches are synced" controller="service config"
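	The kube-proxy warning about nodePortAddresses being unset is informational: with the field empty, NodePort connections are accepted on all local IPs. As the message itself suggests, it can be silenced by restricting NodePort traffic to the primary interface. A minimal sketch, assuming the kubeadm-style kube-proxy ConfigMap that minikube clusters normally carry:
	
	# inspect the current value; an empty nodePortAddresses list triggers the warning
	kubectl --context addons-049370 -n kube-system get configmap kube-proxy -o yaml | grep -n -A2 nodePortAddresses
	# setting nodePortAddresses: ["primary"] in config.conf (or passing --nodeport-addresses primary) removes it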
	
	
	==> kube-scheduler [c29c83b9956a13fe199c44a49b15dba2a1c0c21d5ba02c6402f6f23568614412] <==
	E0904 20:56:11.467729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 20:56:11.473120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 20:56:11.473221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 20:56:11.473483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 20:56:11.473678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 20:56:11.473762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 20:56:11.473851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 20:56:11.473905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 20:56:11.473951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 20:56:11.474028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0904 20:56:11.474102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0904 20:56:11.474173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 20:56:11.474244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 20:56:11.474321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 20:56:11.474380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 20:56:11.474468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 20:56:11.474521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 20:56:11.475320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 20:56:11.478116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 20:56:12.362165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 20:56:12.378813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 20:56:12.396736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 20:56:12.484405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 20:56:12.588679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0904 20:56:15.667534       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
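	The burst of "Failed to watch ... forbidden" errors at 20:56:11-20:56:12 is a startup artifact: the scheduler opens its informers before its RBAC bindings are visible, and the errors stop once the client-ca cache syncs at 20:56:15. If they persisted, the effective permissions could be checked directly; a sketch, using one of the resources named in the errors above:
	
	# verify the scheduler identity can list PVCs across namespaces
	kubectl --context addons-049370 auth can-i list persistentvolumeclaims --as=system:kube-scheduler --all-namespaces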
	
	
	==> kubelet <==
	Sep 04 21:04:54 addons-049370 kubelet[1676]: E0904 21:04:54.338289    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019894338045891  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:05:01 addons-049370 kubelet[1676]: E0904 21:05:01.621250    1676 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 04 21:05:01 addons-049370 kubelet[1676]: E0904 21:05:01.621317    1676 kuberuntime_image.go:43] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 04 21:05:01 addons-049370 kubelet[1676]: E0904 21:05:01.621588    1676 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b_local-path-storage(96904e25-b0d6-4506-8c7c-03307f38bc2b): ErrImagePull: loading manifest for target platform: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 04 21:05:01 addons-049370 kubelet[1676]: E0904 21:05:01.621657    1676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b" podUID="96904e25-b0d6-4506-8c7c-03307f38bc2b"
	Sep 04 21:05:01 addons-049370 kubelet[1676]: I0904 21:05:01.834544    1676 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/96904e25-b0d6-4506-8c7c-03307f38bc2b-data\") pod \"96904e25-b0d6-4506-8c7c-03307f38bc2b\" (UID: \"96904e25-b0d6-4506-8c7c-03307f38bc2b\") "
	Sep 04 21:05:01 addons-049370 kubelet[1676]: I0904 21:05:01.834625    1676 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6fmz\" (UniqueName: \"kubernetes.io/projected/96904e25-b0d6-4506-8c7c-03307f38bc2b-kube-api-access-d6fmz\") pod \"96904e25-b0d6-4506-8c7c-03307f38bc2b\" (UID: \"96904e25-b0d6-4506-8c7c-03307f38bc2b\") "
	Sep 04 21:05:01 addons-049370 kubelet[1676]: I0904 21:05:01.834678    1676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96904e25-b0d6-4506-8c7c-03307f38bc2b-data" (OuterVolumeSpecName: "data") pod "96904e25-b0d6-4506-8c7c-03307f38bc2b" (UID: "96904e25-b0d6-4506-8c7c-03307f38bc2b"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Sep 04 21:05:01 addons-049370 kubelet[1676]: I0904 21:05:01.834709    1676 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/96904e25-b0d6-4506-8c7c-03307f38bc2b-script\") pod \"96904e25-b0d6-4506-8c7c-03307f38bc2b\" (UID: \"96904e25-b0d6-4506-8c7c-03307f38bc2b\") "
	Sep 04 21:05:01 addons-049370 kubelet[1676]: I0904 21:05:01.834835    1676 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/96904e25-b0d6-4506-8c7c-03307f38bc2b-data\") on node \"addons-049370\" DevicePath \"\""
	Sep 04 21:05:01 addons-049370 kubelet[1676]: I0904 21:05:01.835001    1676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96904e25-b0d6-4506-8c7c-03307f38bc2b-script" (OuterVolumeSpecName: "script") pod "96904e25-b0d6-4506-8c7c-03307f38bc2b" (UID: "96904e25-b0d6-4506-8c7c-03307f38bc2b"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Sep 04 21:05:01 addons-049370 kubelet[1676]: I0904 21:05:01.836551    1676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96904e25-b0d6-4506-8c7c-03307f38bc2b-kube-api-access-d6fmz" (OuterVolumeSpecName: "kube-api-access-d6fmz") pod "96904e25-b0d6-4506-8c7c-03307f38bc2b" (UID: "96904e25-b0d6-4506-8c7c-03307f38bc2b"). InnerVolumeSpecName "kube-api-access-d6fmz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 04 21:05:01 addons-049370 kubelet[1676]: I0904 21:05:01.935100    1676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d6fmz\" (UniqueName: \"kubernetes.io/projected/96904e25-b0d6-4506-8c7c-03307f38bc2b-kube-api-access-d6fmz\") on node \"addons-049370\" DevicePath \"\""
	Sep 04 21:05:01 addons-049370 kubelet[1676]: I0904 21:05:01.935133    1676 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/96904e25-b0d6-4506-8c7c-03307f38bc2b-script\") on node \"addons-049370\" DevicePath \"\""
	Sep 04 21:05:03 addons-049370 kubelet[1676]: I0904 21:05:03.979726    1676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96904e25-b0d6-4506-8c7c-03307f38bc2b" path="/var/lib/kubelet/pods/96904e25-b0d6-4506-8c7c-03307f38bc2b/volumes"
	Sep 04 21:05:04 addons-049370 kubelet[1676]: E0904 21:05:04.340777    1676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019904340511571  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:05:04 addons-049370 kubelet[1676]: E0904 21:05:04.340814    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019904340511571  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:05:14 addons-049370 kubelet[1676]: E0904 21:05:14.086207    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1420df6ecb84c17a3dccc200e291f04198580e963225217211bac4a8db68a6f0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1420df6ecb84c17a3dccc200e291f04198580e963225217211bac4a8db68a6f0/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:05:14 addons-049370 kubelet[1676]: E0904 21:05:14.091283    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b7edb16fb0b11eef82194173a59aafc6330f8b4acf0285c961c901faaad650a1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b7edb16fb0b11eef82194173a59aafc6330f8b4acf0285c961c901faaad650a1/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:05:14 addons-049370 kubelet[1676]: E0904 21:05:14.093414    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b7edb16fb0b11eef82194173a59aafc6330f8b4acf0285c961c901faaad650a1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b7edb16fb0b11eef82194173a59aafc6330f8b4acf0285c961c901faaad650a1/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:05:14 addons-049370 kubelet[1676]: E0904 21:05:14.150382    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1420df6ecb84c17a3dccc200e291f04198580e963225217211bac4a8db68a6f0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1420df6ecb84c17a3dccc200e291f04198580e963225217211bac4a8db68a6f0/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:05:14 addons-049370 kubelet[1676]: E0904 21:05:14.162274    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a47f6b029006ae64d2128463bd52c4220348f754efd51fb9e9cc0f8b1c9d182f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a47f6b029006ae64d2128463bd52c4220348f754efd51fb9e9cc0f8b1c9d182f/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:05:14 addons-049370 kubelet[1676]: E0904 21:05:14.169566    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a47f6b029006ae64d2128463bd52c4220348f754efd51fb9e9cc0f8b1c9d182f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a47f6b029006ae64d2128463bd52c4220348f754efd51fb9e9cc0f8b1c9d182f/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:05:14 addons-049370 kubelet[1676]: E0904 21:05:14.343307    1676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019914343098908  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:05:14 addons-049370 kubelet[1676]: E0904 21:05:14.343337    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019914343098908  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
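	Every pull failure in this run resolves to the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests), hit here for busybox and, in the pod events further down, for nginx. Two common workarounds, sketched as assumptions rather than part of the test flow (the secret name and credentials are placeholders): pre-load the images into the node so kubelet never hits the registry, or attach registry credentials to the pulls.
	
	# pre-load the image into the minikube node (kubelet then starts the pod without contacting the registry)
	out/minikube-linux-amd64 -p addons-049370 image load busybox:stable
	# or create a pull secret with Docker Hub credentials and reference it from the pod via imagePullSecrets
	kubectl --context addons-049370 create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<user> --docker-password=<token>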
	
	
	==> storage-provisioner [5a078a0cc821dc014bcb985333d5bbfa410ad383f9567686488e54f4bdadf77c] <==
	W0904 21:04:50.882384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:52.885198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:52.889961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:54.893065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:54.897757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:56.900936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:56.904686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:58.907558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:58.912434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:00.915504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:00.919232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:02.922137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:02.925723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:04.929017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:04.932687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:06.935670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:06.939382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:08.942428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:08.946052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:10.948970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:10.952478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:12.955101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:12.958944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:14.961825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:05:14.965842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
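	These deprecation warnings are emitted by the API server for each request the provisioner makes against core/v1 Endpoints (most likely its leader-election lock); they are informational, and the replacement data lives in discovery.k8s.io/v1 EndpointSlice objects:
	
	# the discovery.k8s.io/v1 replacement for the deprecated core/v1 Endpoints
	kubectl --context addons-049370 get endpointslices -n kube-system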
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-049370 -n addons-049370
helpers_test.go:269: (dbg) Run:  kubectl --context addons-049370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-049370 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-049370 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl: exit status 1 (75.203958ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-049370/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 20:58:29 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6ptm9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6ptm9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m48s                  default-scheduler  Successfully assigned default/nginx to addons-049370
	  Warning  Failed     6m16s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m18s (x3 over 6m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m18s (x2 over 4m21s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    101s (x5 over 6m16s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     101s (x5 over 6m16s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    90s (x4 over 6m47s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-049370/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 20:59:14 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hr2vm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-hr2vm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-049370
	  Warning  Failed     2m49s                kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     77s (x2 over 4m52s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     77s (x3 over 4m52s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    40s (x5 over 4m51s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     40s (x5 over 4m51s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    27s (x4 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hzwmn (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-hzwmn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bcplk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gtdvl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-049370 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-049370 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.642633589s)
--- FAIL: TestAddons/parallel/CSI (378.98s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (302.5s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-049370 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-049370 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
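The helpers_test.go:402 lines below are the harness polling the PVC phase once per interval until it reports the awaited value or the 5m0s budget runs out; it times out here because the local-path helper pod could not pull busybox (see the kubelet log above), so test-pvc never binds. Outside the harness, the same wait can be expressed directly; a sketch, assuming kubectl v1.23+ for the jsonpath condition:

	# block until the PVC reports phase Bound, or fail after 5 minutes
	kubectl --context addons-049370 -n default wait pvc/test-pvc --for=jsonpath='{.status.phase}'=Bound --timeout=5m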
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... the kubectl "get pvc test-pvc -o jsonpath={.status.phase}" command above was re-run 138 more times while polling the PVC phase; identical log lines elided ...]
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-049370 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (832ns)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
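For reference, the repeated jsonpath queries above amount to polling the test-pvc claim until its status.phase reports Bound, and giving up when the context deadline expires. The sketch below shows the same check done with client-go; it is an illustrative assumption, not the actual helpers_test.go implementation, and the helper name waitForPVCBound is hypothetical.

	// Hypothetical sketch only (assumed names and client-go usage; not the actual
	// minikube helpers_test.go code): poll the test-pvc claim until its
	// status.phase reports Bound, which is what the repeated jsonpath queries
	// above are doing from the command line.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPVCBound polls every 2s until the PVC is Bound or the timeout expires.
	func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not ready yet"
				}
				return pvc.Status.Phase == corev1.ClaimBound, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPVCBound(context.Background(), cs, "default", "test-pvc", 5*time.Minute); err != nil {
			fmt.Println("PVC test-pvc did not become Bound:", err)
		}
	}

On the timeout path this returns a context-deadline error, which is the same failure mode recorded above.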
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-049370
helpers_test.go:243: (dbg) docker inspect addons-049370:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3",
	        "Created": "2025-09-04T20:55:59.262503813Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 390267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T20:55:59.29310334Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:26724cb1013292a869503ea8c35fa5a932e6ec3e4e85e318411373ae6a24478b",
	        "ResolvConfPath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/hosts",
	        "LogPath": "/var/lib/docker/containers/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3/5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3-json.log",
	        "Name": "/addons-049370",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-049370:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-049370",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5caec540cec027b6b9e44289fa39b99a9d281f88769b5a3b10a72fbc2efdcfc3",
	                "LowerDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0-init/diff:/var/lib/docker/overlay2/fcc29a201369e5fb579bfafdafe90606e5b86bd2f4b52beb7f6a97f7e955214d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc05b76b54c7cfbf7fa4d96813c5d335fe5daf2168b1a69554f99d939c584ee0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-049370",
	                "Source": "/var/lib/docker/volumes/addons-049370/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-049370",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-049370",
	                "name.minikube.sigs.k8s.io": "addons-049370",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebe38700b80a638159b3489df03c5870e9f15ecf00ad219d1d9b3fbc49acec55",
	            "SandboxKey": "/var/run/docker/netns/ebe38700b80a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-049370": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:41:22:73:0f:f1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2048bdf288b9f197869aef65f41d479e8afce6e3ad28d597acd24bc87d544c41",
	                    "EndpointID": "84d0e0934b5175bdbf5a7fed011cc5c5fd5e6125bf967cd744e715e3f5eb7d74",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-049370",
	                        "5caec540cec0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
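The post-mortem captures the full docker inspect dump above; when only a single field is needed (for example the container's IP on the addons-049370 network, 192.168.49.2 here), the docker CLI's -f/--format Go-template flag can extract it directly. The wrapper below is an illustrative sketch only, not part of minikube; the container and network name are taken from the dump above.

	// Illustrative only: shell out to `docker inspect -f` to pull the container's
	// IP address on the "addons-049370" network instead of dumping the whole JSON.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The Go template indexes the Networks map by network name; both the
		// container name and the network name come from the inspect output above.
		out, err := exec.Command("docker", "inspect", "-f",
			`{{ (index .NetworkSettings.Networks "addons-049370").IPAddress }}`,
			"addons-049370").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("container IP:", strings.TrimSpace(string(out))) // 192.168.49.2 in the dump above
	}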
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-049370 -n addons-049370
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-049370 logs -n 25: (1.118281205s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ delete  │ -p download-only-807406                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-807406   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ delete  │ -p download-only-640345                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-640345   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ delete  │ -p download-only-807406                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-807406   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ start   │ --download-only -p download-docker-306069 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-306069 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ delete  │ -p download-docker-306069                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-306069 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ start   │ --download-only -p binary-mirror-563304 --alsologtostderr --binary-mirror http://127.0.0.1:41655 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-563304   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ delete  │ -p binary-mirror-563304                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-563304   │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ addons  │ disable dashboard -p addons-049370                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ addons  │ enable dashboard -p addons-049370                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ start   │ -p addons-049370 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ enable headlamp -p addons-049370 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:58 UTC │
	│ addons  │ addons-049370 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:58 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-049370                                                                                                                                                                                                                                                                                                                                                                                           │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ ip      │ addons-049370 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	│ addons  │ addons-049370 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-049370          │ jenkins │ v1.36.0 │ 04 Sep 25 20:59 UTC │ 04 Sep 25 20:59 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 20:55:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:55:35.931187  389648 out.go:360] Setting OutFile to fd 1 ...
	I0904 20:55:35.931440  389648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:35.931451  389648 out.go:374] Setting ErrFile to fd 2...
	I0904 20:55:35.931458  389648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:35.931653  389648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 20:55:35.932252  389648 out.go:368] Setting JSON to false
	I0904 20:55:35.933194  389648 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9485,"bootTime":1757009851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 20:55:35.933295  389648 start.go:140] virtualization: kvm guest
	I0904 20:55:35.935053  389648 out.go:179] * [addons-049370] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 20:55:35.936502  389648 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 20:55:35.936515  389648 notify.go:220] Checking for updates...
	I0904 20:55:35.938589  389648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:55:35.939875  389648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 20:55:35.941016  389648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 20:55:35.942120  389648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 20:55:35.943340  389648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 20:55:35.944678  389648 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 20:55:35.967955  389648 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 20:55:35.968038  389648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:55:36.013884  389648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 20:55:36.00384503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 20:55:36.013990  389648 docker.go:318] overlay module found
	I0904 20:55:36.015880  389648 out.go:179] * Using the docker driver based on user configuration
	I0904 20:55:36.017259  389648 start.go:304] selected driver: docker
	I0904 20:55:36.017279  389648 start.go:918] validating driver "docker" against <nil>
	I0904 20:55:36.017301  389648 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 20:55:36.018181  389648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:55:36.061743  389648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 20:55:36.053555345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 20:55:36.061946  389648 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 20:55:36.062186  389648 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:55:36.063851  389648 out.go:179] * Using Docker driver with root privileges
	I0904 20:55:36.065032  389648 cni.go:84] Creating CNI manager for ""
	I0904 20:55:36.065096  389648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:55:36.065109  389648 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 20:55:36.065189  389648 start.go:348] cluster config:
	{Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0904 20:55:36.066545  389648 out.go:179] * Starting "addons-049370" primary control-plane node in "addons-049370" cluster
	I0904 20:55:36.067696  389648 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 20:55:36.068952  389648 out.go:179] * Pulling base image v0.0.47-1756116447-21413 ...
	I0904 20:55:36.070027  389648 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:55:36.070067  389648 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 20:55:36.070084  389648 cache.go:58] Caching tarball of preloaded images
	I0904 20:55:36.070129  389648 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0904 20:55:36.070184  389648 preload.go:172] Found /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 20:55:36.070196  389648 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 20:55:36.070509  389648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/config.json ...
	I0904 20:55:36.070535  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/config.json: {Name:mkeaddf16ea076f194194c7e6e0eb8ad847648bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:55:36.085707  389648 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 to local cache
	I0904 20:55:36.085814  389648 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory
	I0904 20:55:36.085830  389648 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory, skipping pull
	I0904 20:55:36.085834  389648 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 exists in cache, skipping pull
	I0904 20:55:36.085841  389648 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 as a tarball
	I0904 20:55:36.085848  389648 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 from local cache
	I0904 20:55:47.569774  389648 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 from cached tarball
	I0904 20:55:47.569822  389648 cache.go:232] Successfully downloaded all kic artifacts
	I0904 20:55:47.569872  389648 start.go:360] acquireMachinesLock for addons-049370: {Name:mk8e52f32278895920c6de02ca736f9f45438008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:55:47.569963  389648 start.go:364] duration metric: took 71.514µs to acquireMachinesLock for "addons-049370"
	I0904 20:55:47.569986  389648 start.go:93] Provisioning new machine with config: &{Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:55:47.570051  389648 start.go:125] createHost starting for "" (driver="docker")
	I0904 20:55:47.571722  389648 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0904 20:55:47.571956  389648 start.go:159] libmachine.API.Create for "addons-049370" (driver="docker")
	I0904 20:55:47.571986  389648 client.go:168] LocalClient.Create starting
	I0904 20:55:47.572093  389648 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem
	I0904 20:55:47.750984  389648 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem
	I0904 20:55:47.850792  389648 cli_runner.go:164] Run: docker network inspect addons-049370 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 20:55:47.867272  389648 cli_runner.go:211] docker network inspect addons-049370 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 20:55:47.867344  389648 network_create.go:284] running [docker network inspect addons-049370] to gather additional debugging logs...
	I0904 20:55:47.867369  389648 cli_runner.go:164] Run: docker network inspect addons-049370
	W0904 20:55:47.882593  389648 cli_runner.go:211] docker network inspect addons-049370 returned with exit code 1
	I0904 20:55:47.882619  389648 network_create.go:287] error running [docker network inspect addons-049370]: docker network inspect addons-049370: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-049370 not found
	I0904 20:55:47.882643  389648 network_create.go:289] output of [docker network inspect addons-049370]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-049370 not found
	
	** /stderr **
	I0904 20:55:47.882767  389648 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:55:47.897896  389648 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f29240}
	I0904 20:55:47.897941  389648 network_create.go:124] attempt to create docker network addons-049370 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0904 20:55:47.897989  389648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-049370 addons-049370
	I0904 20:55:47.946511  389648 network_create.go:108] docker network addons-049370 192.168.49.0/24 created
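The lines above settle on 192.168.49.0/24 because no existing Docker network overlaps it, then create the bridge network with that subnet. A minimal sketch of that kind of overlap check using only the Go standard library; the candidate list and step size here are assumptions for illustration, not minikube's actual algorithm:

// free_subnet_sketch.go
//
// Walk candidate 192.168.x.0/24 blocks and return the first one that does
// not overlap any CIDR already in use by existing networks.
package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that overlaps none of `used`.
// Two CIDRs overlap iff one contains the other's network address.
func firstFreeSubnet(used []*net.IPNet) (*net.IPNet, error) {
	for third := 49; third < 255; third += 10 { // starting point and step are assumptions
		_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if err != nil {
			return nil, err
		}
		overlaps := false
		for _, u := range used {
			if u.Contains(candidate.IP) || candidate.Contains(u.IP) {
				overlaps = true
				break
			}
		}
		if !overlaps {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free subnet found")
}

func main() {
	_, inUse, _ := net.ParseCIDR("172.17.0.0/16") // e.g. the default docker bridge
	got, err := firstFreeSubnet([]*net.IPNet{inUse})
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", got) // 192.168.49.0/24 here
}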
	I0904 20:55:47.946541  389648 kic.go:121] calculated static IP "192.168.49.2" for the "addons-049370" container
	I0904 20:55:47.946616  389648 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 20:55:47.961507  389648 cli_runner.go:164] Run: docker volume create addons-049370 --label name.minikube.sigs.k8s.io=addons-049370 --label created_by.minikube.sigs.k8s.io=true
	I0904 20:55:47.977348  389648 oci.go:103] Successfully created a docker volume addons-049370
	I0904 20:55:47.977414  389648 cli_runner.go:164] Run: docker run --rm --name addons-049370-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-049370 --entrypoint /usr/bin/test -v addons-049370:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -d /var/lib
	I0904 20:55:54.908931  389648 cli_runner.go:217] Completed: docker run --rm --name addons-049370-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-049370 --entrypoint /usr/bin/test -v addons-049370:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -d /var/lib: (6.931464681s)
	I0904 20:55:54.908963  389648 oci.go:107] Successfully prepared a docker volume addons-049370
	I0904 20:55:54.908988  389648 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:55:54.909014  389648 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 20:55:54.909085  389648 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-049370:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 20:55:59.203486  389648 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-049370:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.294349299s)
	I0904 20:55:59.203526  389648 kic.go:203] duration metric: took 4.294508066s to extract preloaded images to volume ...
	W0904 20:55:59.203673  389648 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 20:55:59.203816  389648 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 20:55:59.248150  389648 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-049370 --name addons-049370 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-049370 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-049370 --network addons-049370 --ip 192.168.49.2 --volume addons-049370:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9
	I0904 20:55:59.483162  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Running}}
	I0904 20:55:59.500560  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:55:59.519189  389648 cli_runner.go:164] Run: docker exec addons-049370 stat /var/lib/dpkg/alternatives/iptables
	I0904 20:55:59.559150  389648 oci.go:144] the created container "addons-049370" has a running status.
	I0904 20:55:59.559182  389648 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa...
	I0904 20:55:59.730819  389648 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 20:55:59.749901  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:55:59.769336  389648 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 20:55:59.769365  389648 kic_runner.go:114] Args: [docker exec --privileged addons-049370 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 20:55:59.858697  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:55:59.878986  389648 machine.go:93] provisionDockerMachine start ...
	I0904 20:55:59.879111  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:55:59.900388  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:55:59.900618  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:55:59.900630  389648 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 20:56:00.092134  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-049370
	
	I0904 20:56:00.092166  389648 ubuntu.go:182] provisioning hostname "addons-049370"
	I0904 20:56:00.092222  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.110942  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:00.111171  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:56:00.111192  389648 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-049370 && echo "addons-049370" | sudo tee /etc/hostname
	I0904 20:56:00.235028  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-049370
	
	I0904 20:56:00.235115  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.254182  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:00.254444  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:56:00.254463  389648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-049370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-049370/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-049370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 20:56:00.364487  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 20:56:00.364528  389648 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21490-384635/.minikube CaCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21490-384635/.minikube}
	I0904 20:56:00.364564  389648 ubuntu.go:190] setting up certificates
	I0904 20:56:00.364581  389648 provision.go:84] configureAuth start
	I0904 20:56:00.364638  389648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-049370
	I0904 20:56:00.380933  389648 provision.go:143] copyHostCerts
	I0904 20:56:00.381007  389648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem (1078 bytes)
	I0904 20:56:00.381110  389648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem (1123 bytes)
	I0904 20:56:00.381171  389648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem (1675 bytes)
	I0904 20:56:00.381291  389648 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem org=jenkins.addons-049370 san=[127.0.0.1 192.168.49.2 addons-049370 localhost minikube]
	I0904 20:56:00.582774  389648 provision.go:177] copyRemoteCerts
	I0904 20:56:00.582833  389648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 20:56:00.582888  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.600896  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:00.685189  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 20:56:00.706872  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 20:56:00.727318  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 20:56:00.747581  389648 provision.go:87] duration metric: took 382.988372ms to configureAuth
	I0904 20:56:00.747609  389648 ubuntu.go:206] setting minikube options for container-runtime
	I0904 20:56:00.747766  389648 config.go:182] Loaded profile config "addons-049370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:00.747906  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.764149  389648 main.go:141] libmachine: Using SSH client type: native
	I0904 20:56:00.764350  389648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0904 20:56:00.764368  389648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 20:56:00.958932  389648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 20:56:00.958968  389648 machine.go:96] duration metric: took 1.079954584s to provisionDockerMachine
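Provisioning above talks to the container over SSH on whatever host port Docker mapped for port 22 (127.0.0.1:33145 in this run). A minimal sketch of a readiness probe for such an endpoint; the address and timeout are illustrative only:

// ssh_port_probe.go
//
// Dial the mapped SSH address until it accepts a TCP connection or the
// deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	// 33145 is the host port Docker picked in this log; it changes per run.
	if err := waitForTCP("127.0.0.1:33145", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh endpoint is accepting connections")
}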
	I0904 20:56:00.958982  389648 client.go:171] duration metric: took 13.386987071s to LocalClient.Create
	I0904 20:56:00.959009  389648 start.go:167] duration metric: took 13.387053802s to libmachine.API.Create "addons-049370"
	I0904 20:56:00.959025  389648 start.go:293] postStartSetup for "addons-049370" (driver="docker")
	I0904 20:56:00.959040  389648 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 20:56:00.959109  389648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 20:56:00.959158  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:00.975608  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.061278  389648 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 20:56:01.064210  389648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 20:56:01.064237  389648 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 20:56:01.064244  389648 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 20:56:01.064251  389648 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 20:56:01.064263  389648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/addons for local assets ...
	I0904 20:56:01.064321  389648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/files for local assets ...
	I0904 20:56:01.064347  389648 start.go:296] duration metric: took 105.314476ms for postStartSetup
	I0904 20:56:01.064647  389648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-049370
	I0904 20:56:01.081390  389648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/config.json ...
	I0904 20:56:01.081619  389648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 20:56:01.081659  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:01.098242  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.177520  389648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 20:56:01.181443  389648 start.go:128] duration metric: took 13.611378177s to createHost
	I0904 20:56:01.181464  389648 start.go:83] releasing machines lock for "addons-049370", held for 13.611489751s
	I0904 20:56:01.181518  389648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-049370
	I0904 20:56:01.197665  389648 ssh_runner.go:195] Run: cat /version.json
	I0904 20:56:01.197712  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:01.197747  389648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 20:56:01.197832  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:01.217406  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.217960  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:01.369596  389648 ssh_runner.go:195] Run: systemctl --version
	I0904 20:56:01.373474  389648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 20:56:01.509565  389648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 20:56:01.513834  389648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:56:01.530180  389648 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 20:56:01.530256  389648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:56:01.553751  389648 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0904 20:56:01.553778  389648 start.go:495] detecting cgroup driver to use...
	I0904 20:56:01.553812  389648 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 20:56:01.553868  389648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 20:56:01.567182  389648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 20:56:01.576378  389648 docker.go:218] disabling cri-docker service (if available) ...
	I0904 20:56:01.576432  389648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 20:56:01.587988  389648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 20:56:01.599829  389648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 20:56:01.673115  389648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 20:56:01.753644  389648 docker.go:234] disabling docker service ...
	I0904 20:56:01.753708  389648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 20:56:01.770449  389648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 20:56:01.780079  389648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 20:56:01.852634  389648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 20:56:01.929656  389648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 20:56:01.939388  389648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 20:56:01.953483  389648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 20:56:01.953533  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.961514  389648 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 20:56:01.961581  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.969587  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.977328  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:01.985460  389648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 20:56:01.992893  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:02.000897  389648 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:02.014229  389648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:56:02.022636  389648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 20:56:02.029801  389648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 20:56:02.036815  389648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:02.107470  389648 ssh_runner.go:195] Run: sudo systemctl restart crio
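The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before crio is restarted. A minimal in-memory sketch of the first two substitutions; the starting values in the sample config are assumptions:

// crio_conf_sketch.go
//
// Mirror the sed edits shown above on an in-memory string: pin the pause
// image and switch the cgroup manager to cgroupfs.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}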
	I0904 20:56:02.204181  389648 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 20:56:02.204269  389648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 20:56:02.207556  389648 start.go:563] Will wait 60s for crictl version
	I0904 20:56:02.207613  389648 ssh_runner.go:195] Run: which crictl
	I0904 20:56:02.210531  389648 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 20:56:02.242395  389648 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 20:56:02.242466  389648 ssh_runner.go:195] Run: crio --version
	I0904 20:56:02.275988  389648 ssh_runner.go:195] Run: crio --version
	I0904 20:56:02.310411  389648 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 20:56:02.311905  389648 cli_runner.go:164] Run: docker network inspect addons-049370 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:56:02.327725  389648 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 20:56:02.331056  389648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:56:02.340959  389648 kubeadm.go:875] updating cluster {Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 20:56:02.341073  389648 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 20:56:02.341116  389648 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:56:02.405091  389648 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:56:02.405113  389648 crio.go:433] Images already preloaded, skipping extraction
	I0904 20:56:02.405157  389648 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:56:02.435602  389648 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:56:02.435624  389648 cache_images.go:85] Images are preloaded, skipping loading
	I0904 20:56:02.435633  389648 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0904 20:56:02.435742  389648 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-049370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 20:56:02.435801  389648 ssh_runner.go:195] Run: crio config
	I0904 20:56:02.475208  389648 cni.go:84] Creating CNI manager for ""
	I0904 20:56:02.475229  389648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:56:02.475242  389648 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 20:56:02.475263  389648 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-049370 NodeName:addons-049370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 20:56:02.475385  389648 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-049370"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 20:56:02.475439  389648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 20:56:02.483384  389648 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 20:56:02.483434  389648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 20:56:02.490999  389648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 20:56:02.506097  389648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 20:56:02.521263  389648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
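The kubeadm config rendered above and copied to /var/tmp/minikube/kubeadm.yaml.new is a single file containing four YAML documents separated by ---. A minimal sketch of a sanity check that splits the documents and reports each kind; the embedded sample is trimmed to the kind/apiVersion headers of the config shown above:

// kubeadm_yaml_kinds.go
//
// Split a multi-document kubeadm config and report each document's kind.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func kinds(config string) []string {
	var out []string
	for _, doc := range strings.Split(config, "\n---\n") {
		scanner := bufio.NewScanner(strings.NewReader(doc))
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			if strings.HasPrefix(line, "kind:") {
				out = append(out, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
				break
			}
		}
	}
	return out
}

func main() {
	config := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	// Prints [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(kinds(config))
}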
	I0904 20:56:02.536086  389648 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0904 20:56:02.539041  389648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:56:02.548083  389648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:02.620733  389648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:56:02.632098  389648 certs.go:68] Setting up /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370 for IP: 192.168.49.2
	I0904 20:56:02.632134  389648 certs.go:194] generating shared ca certs ...
	I0904 20:56:02.632155  389648 certs.go:226] acquiring lock for ca certs: {Name:mkab35eaf89c739a644dd45428dbbd1b30c489d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:02.632303  389648 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key
	I0904 20:56:02.772055  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt ...
	I0904 20:56:02.772085  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt: {Name:mk404ac6f8708b208ba3c17564d32d1c6e1f2d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:02.772267  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key ...
	I0904 20:56:02.772279  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key: {Name:mk0f029ece1be42b4490f030d22d0963e0de5ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:02.772354  389648 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key
	I0904 20:56:03.010123  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt ...
	I0904 20:56:03.010158  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt: {Name:mk7836ca5bbc78d58e9f795ae3bd0cc1b3f94116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.010336  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key ...
	I0904 20:56:03.010350  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key: {Name:mk4a37f8d0fc0b197f0796089f579493b4ab1519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.010419  389648 certs.go:256] generating profile certs ...
	I0904 20:56:03.010492  389648 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.key
	I0904 20:56:03.010508  389648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt with IP's: []
	I0904 20:56:03.189084  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt ...
	I0904 20:56:03.189116  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: {Name:mkd7ec52fc00b41923df1429201e9537ed50a6ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.189278  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.key ...
	I0904 20:56:03.189288  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.key: {Name:mk02506672d1abc668baddf35412038560ece7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.189360  389648 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8
	I0904 20:56:03.189379  389648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0904 20:56:03.499646  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8 ...
	I0904 20:56:03.499681  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8: {Name:mk8c9ae053706a4ea8f20f5fd17de3c20f5c4e30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.499842  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8 ...
	I0904 20:56:03.499857  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8: {Name:mk9c5b0ad197ad61ad1f2b3b99dfc9c995bc0acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:03.499927  389648 certs.go:381] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt.b5f53ae8 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt
	I0904 20:56:03.500017  389648 certs.go:385] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key.b5f53ae8 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key
	I0904 20:56:03.500063  389648 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key
	I0904 20:56:03.500080  389648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt with IP's: []
	I0904 20:56:04.206716  389648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt ...
	I0904 20:56:04.206749  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt: {Name:mk2210684251083ae7ccb41ecbd3350906b53776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:04.206912  389648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key ...
	I0904 20:56:04.206925  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key: {Name:mk24ebbc3c1cb4ca4f1f7bb1a93ec6d982e6058d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:04.207093  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem (1679 bytes)
	I0904 20:56:04.207128  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem (1078 bytes)
	I0904 20:56:04.207156  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem (1123 bytes)
	I0904 20:56:04.207178  389648 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem (1675 bytes)
	I0904 20:56:04.207825  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 20:56:04.229255  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 20:56:04.249412  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 20:56:04.269463  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 20:56:04.289100  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 20:56:04.309546  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 20:56:04.330101  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 20:56:04.350231  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 20:56:04.370529  389648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 20:56:04.390259  389648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 20:56:04.404879  389648 ssh_runner.go:195] Run: openssl version
	I0904 20:56:04.409558  389648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 20:56:04.417330  389648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:04.420173  389648 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:56 /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:04.420213  389648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:56:04.426284  389648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
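The certs.go lines above generate the shared minikubeCA and per-profile certificates and copy them into /var/lib/minikube/certs. A minimal sketch of generating a self-signed CA with the Go standard library; the common name and validity period are illustrative, not minikube's exact parameters:

// ca_sketch.go
//
// Create a self-signed CA key pair in memory and PEM-encode it.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour), // validity is illustrative
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}

	// Self-signed: the template acts as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})

	fmt.Printf("ca.crt: %d bytes, ca.key: %d bytes\n", len(certPEM), len(keyPEM))
}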
	I0904 20:56:04.434253  389648 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 20:56:04.437015  389648 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 20:56:04.437090  389648 kubeadm.go:392] StartCluster: {Name:addons-049370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-049370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:56:04.437155  389648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 20:56:04.437197  389648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 20:56:04.468884  389648 cri.go:89] found id: ""
	I0904 20:56:04.468950  389648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 20:56:04.476436  389648 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 20:56:04.483832  389648 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 20:56:04.483872  389648 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 20:56:04.491177  389648 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 20:56:04.491196  389648 kubeadm.go:157] found existing configuration files:
	
	I0904 20:56:04.491247  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 20:56:04.498385  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 20:56:04.498431  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 20:56:04.505641  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 20:56:04.512961  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 20:56:04.512996  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 20:56:04.519960  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 20:56:04.527106  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 20:56:04.527145  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 20:56:04.534344  389648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 20:56:04.541535  389648 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 20:56:04.541584  389648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 20:56:04.548873  389648 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 20:56:04.583125  389648 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 20:56:04.583201  389648 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 20:56:04.597681  389648 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 20:56:04.597741  389648 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0904 20:56:04.597803  389648 kubeadm.go:310] OS: Linux
	I0904 20:56:04.597915  389648 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 20:56:04.597990  389648 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 20:56:04.598061  389648 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 20:56:04.598158  389648 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 20:56:04.598223  389648 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 20:56:04.598271  389648 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 20:56:04.598336  389648 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 20:56:04.598406  389648 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 20:56:04.598474  389648 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 20:56:04.647143  389648 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 20:56:04.647322  389648 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 20:56:04.647453  389648 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 20:56:04.653687  389648 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 20:56:04.656516  389648 out.go:252]   - Generating certificates and keys ...
	I0904 20:56:04.656617  389648 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 20:56:04.656693  389648 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 20:56:04.868159  389648 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 20:56:05.089300  389648 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 20:56:05.307580  389648 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 20:56:05.541675  389648 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 20:56:05.660773  389648 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 20:56:05.660952  389648 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-049370 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:56:05.874335  389648 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 20:56:05.874525  389648 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-049370 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:56:06.201674  389648 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 20:56:06.395227  389648 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 20:56:06.658231  389648 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 20:56:06.658358  389648 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 20:56:06.844487  389648 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 20:56:07.298671  389648 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 20:56:07.543710  389648 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 20:56:07.923783  389648 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 20:56:08.223748  389648 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 20:56:08.224259  389648 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 20:56:08.226815  389648 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 20:56:08.228639  389648 out.go:252]   - Booting up control plane ...
	I0904 20:56:08.228790  389648 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 20:56:08.228909  389648 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 20:56:08.228988  389648 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 20:56:08.237068  389648 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 20:56:08.237206  389648 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 20:56:08.242388  389648 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 20:56:08.242635  389648 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 20:56:08.242706  389648 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 20:56:08.316793  389648 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 20:56:08.316922  389648 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 20:56:08.818465  389648 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.780617ms
	I0904 20:56:08.822350  389648 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 20:56:08.822466  389648 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0904 20:56:08.822584  389648 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 20:56:08.822692  389648 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 20:56:10.827725  389648 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.005237267s
	I0904 20:56:11.470833  389648 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.648459396s
	I0904 20:56:13.324669  389648 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.502233446s
	I0904 20:56:13.335088  389648 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 20:56:13.344120  389648 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 20:56:13.351749  389648 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 20:56:13.351978  389648 kubeadm.go:310] [mark-control-plane] Marking the node addons-049370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 20:56:13.359295  389648 kubeadm.go:310] [bootstrap-token] Using token: 2wn3c0.ojgacqfx8o0hgs3z
	I0904 20:56:13.360520  389648 out.go:252]   - Configuring RBAC rules ...
	I0904 20:56:13.360674  389648 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 20:56:13.363353  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 20:56:13.367752  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 20:56:13.369941  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 20:56:13.372028  389648 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 20:56:13.375032  389648 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 20:56:13.729580  389648 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 20:56:14.144230  389648 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 20:56:14.730781  389648 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 20:56:14.731685  389648 kubeadm.go:310] 
	I0904 20:56:14.731789  389648 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 20:56:14.731799  389648 kubeadm.go:310] 
	I0904 20:56:14.731900  389648 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 20:56:14.731934  389648 kubeadm.go:310] 
	I0904 20:56:14.731997  389648 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 20:56:14.732055  389648 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 20:56:14.732151  389648 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 20:56:14.732161  389648 kubeadm.go:310] 
	I0904 20:56:14.732233  389648 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 20:56:14.732242  389648 kubeadm.go:310] 
	I0904 20:56:14.732312  389648 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 20:56:14.732321  389648 kubeadm.go:310] 
	I0904 20:56:14.732378  389648 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 20:56:14.732445  389648 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 20:56:14.732534  389648 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 20:56:14.732544  389648 kubeadm.go:310] 
	I0904 20:56:14.732650  389648 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 20:56:14.732787  389648 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 20:56:14.732801  389648 kubeadm.go:310] 
	I0904 20:56:14.732903  389648 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2wn3c0.ojgacqfx8o0hgs3z \
	I0904 20:56:14.733021  389648 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 \
	I0904 20:56:14.733052  389648 kubeadm.go:310] 	--control-plane 
	I0904 20:56:14.733062  389648 kubeadm.go:310] 
	I0904 20:56:14.733161  389648 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 20:56:14.733169  389648 kubeadm.go:310] 
	I0904 20:56:14.733281  389648 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2wn3c0.ojgacqfx8o0hgs3z \
	I0904 20:56:14.733409  389648 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 
	I0904 20:56:14.735269  389648 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0904 20:56:14.735560  389648 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0904 20:56:14.735715  389648 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 20:56:14.735757  389648 cni.go:84] Creating CNI manager for ""
	I0904 20:56:14.735771  389648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:56:14.737265  389648 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0904 20:56:14.738354  389648 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 20:56:14.741948  389648 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0904 20:56:14.741966  389648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 20:56:14.758407  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 20:56:14.949539  389648 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 20:56:14.949629  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:14.949645  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-049370 minikube.k8s.io/updated_at=2025_09_04T20_56_14_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a minikube.k8s.io/name=addons-049370 minikube.k8s.io/primary=true
	I0904 20:56:14.957282  389648 ops.go:34] apiserver oom_adj: -16
	I0904 20:56:15.056202  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:15.556268  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:16.056217  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:16.557001  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:17.057153  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:17.556713  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:18.057056  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:18.556307  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:19.057162  389648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:56:19.120633  389648 kubeadm.go:1105] duration metric: took 4.171070637s to wait for elevateKubeSystemPrivileges
	I0904 20:56:19.120676  389648 kubeadm.go:394] duration metric: took 14.683591745s to StartCluster
	I0904 20:56:19.120715  389648 settings.go:142] acquiring lock: {Name:mke06342cfb6705345a5c7324f763dc44aea4569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:19.120870  389648 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 20:56:19.121542  389648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/kubeconfig: {Name:mk6b311573f3fade9cba8f894d5c9f5ca76d1e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:56:19.121797  389648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 20:56:19.121845  389648 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:56:19.121892  389648 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0904 20:56:19.122079  389648 config.go:182] Loaded profile config "addons-049370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:19.122530  389648 addons.go:69] Setting inspektor-gadget=true in profile "addons-049370"
	I0904 20:56:19.122543  389648 addons.go:69] Setting yakd=true in profile "addons-049370"
	I0904 20:56:19.122568  389648 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-049370"
	I0904 20:56:19.122574  389648 addons.go:69] Setting registry-creds=true in profile "addons-049370"
	I0904 20:56:19.122584  389648 addons.go:238] Setting addon yakd=true in "addons-049370"
	I0904 20:56:19.122588  389648 addons.go:69] Setting metrics-server=true in profile "addons-049370"
	I0904 20:56:19.122595  389648 addons.go:238] Setting addon registry-creds=true in "addons-049370"
	I0904 20:56:19.122597  389648 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-049370"
	I0904 20:56:19.122606  389648 addons.go:238] Setting addon metrics-server=true in "addons-049370"
	I0904 20:56:19.122631  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122635  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122637  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122574  389648 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-049370"
	I0904 20:56:19.122615  389648 addons.go:69] Setting registry=true in profile "addons-049370"
	I0904 20:56:19.122683  389648 addons.go:69] Setting cloud-spanner=true in profile "addons-049370"
	I0904 20:56:19.122665  389648 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-049370"
	I0904 20:56:19.122703  389648 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-049370"
	I0904 20:56:19.122729  389648 addons.go:238] Setting addon registry=true in "addons-049370"
	I0904 20:56:19.122730  389648 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-049370"
	I0904 20:56:19.122740  389648 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-049370"
	I0904 20:56:19.122757  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.122781  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.123155  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123184  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123217  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122637  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.123265  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123272  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123605  389648 addons.go:69] Setting storage-provisioner=true in profile "addons-049370"
	I0904 20:56:19.123629  389648 addons.go:238] Setting addon storage-provisioner=true in "addons-049370"
	I0904 20:56:19.123657  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.123677  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123784  389648 addons.go:69] Setting volumesnapshots=true in profile "addons-049370"
	I0904 20:56:19.123801  389648 addons.go:238] Setting addon volumesnapshots=true in "addons-049370"
	I0904 20:56:19.123826  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.124143  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122696  389648 addons.go:238] Setting addon cloud-spanner=true in "addons-049370"
	I0904 20:56:19.124582  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.124795  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.123219  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.125090  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.125204  389648 addons.go:69] Setting gcp-auth=true in profile "addons-049370"
	I0904 20:56:19.126262  389648 mustload.go:65] Loading cluster: addons-049370
	I0904 20:56:19.126543  389648 config.go:182] Loaded profile config "addons-049370": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 20:56:19.126863  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122674  389648 addons.go:69] Setting default-storageclass=true in profile "addons-049370"
	I0904 20:56:19.130409  389648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-049370"
	I0904 20:56:19.130766  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.122663  389648 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-049370"
	I0904 20:56:19.132380  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.132897  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.160874  389648 out.go:179] * Verifying Kubernetes components...
	I0904 20:56:19.122562  389648 addons.go:238] Setting addon inspektor-gadget=true in "addons-049370"
	I0904 20:56:19.161079  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.161765  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.163225  389648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:56:19.125353  389648 addons.go:69] Setting ingress=true in profile "addons-049370"
	I0904 20:56:19.164437  389648 addons.go:238] Setting addon ingress=true in "addons-049370"
	I0904 20:56:19.164483  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.164897  389648 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0904 20:56:19.166432  389648 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0904 20:56:19.165219  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.167756  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0904 20:56:19.168483  389648 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 20:56:19.168508  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0904 20:56:19.168567  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.168620  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0904 20:56:19.168633  389648 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0904 20:56:19.168672  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.170255  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0904 20:56:19.170500  389648 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-049370"
	I0904 20:56:19.170541  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.165299  389648 addons.go:69] Setting volcano=true in profile "addons-049370"
	I0904 20:56:19.170598  389648 addons.go:238] Setting addon volcano=true in "addons-049370"
	I0904 20:56:19.170662  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.125367  389648 addons.go:69] Setting ingress-dns=true in profile "addons-049370"
	I0904 20:56:19.170703  389648 addons.go:238] Setting addon ingress-dns=true in "addons-049370"
	I0904 20:56:19.170745  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.171072  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.171559  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.171696  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.173941  389648 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0904 20:56:19.174145  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0904 20:56:19.175294  389648 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 20:56:19.175317  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0904 20:56:19.175370  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.176359  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0904 20:56:19.177473  389648 out.go:179]   - Using image docker.io/registry:3.0.0
	I0904 20:56:19.178644  389648 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0904 20:56:19.179797  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0904 20:56:19.184781  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0904 20:56:19.185493  389648 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0904 20:56:19.185566  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0904 20:56:19.185663  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.193293  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0904 20:56:19.193281  389648 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0904 20:56:19.193365  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0904 20:56:19.193325  389648 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 20:56:19.194473  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 20:56:19.194494  389648 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 20:56:19.194572  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.195252  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0904 20:56:19.195290  389648 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0904 20:56:19.195358  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.195374  389648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:56:19.195397  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 20:56:19.195449  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.195584  389648 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0904 20:56:19.196553  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0904 20:56:19.196568  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0904 20:56:19.196639  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	W0904 20:56:19.205941  389648 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0904 20:56:19.216885  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.234344  389648 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0904 20:56:19.234475  389648 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0904 20:56:19.236096  389648 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0904 20:56:19.236117  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0904 20:56:19.236181  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.236410  389648 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0904 20:56:19.236424  389648 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0904 20:56:19.236486  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.238985  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.249090  389648 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0904 20:56:19.250463  389648 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:56:19.250482  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0904 20:56:19.250581  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.251306  389648 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0904 20:56:19.252687  389648 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:56:19.252707  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0904 20:56:19.252773  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.253251  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.253990  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.275865  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.276415  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.286380  389648 addons.go:238] Setting addon default-storageclass=true in "addons-049370"
	I0904 20:56:19.286427  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:19.286470  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.286751  389648 out.go:179]   - Using image docker.io/busybox:stable
	I0904 20:56:19.286808  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0904 20:56:19.286911  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:19.289833  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.290415  389648 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0904 20:56:19.290520  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:19.290861  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.291698  389648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:56:19.291722  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0904 20:56:19.291783  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.294160  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:19.298343  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.298945  389648 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:56:19.298968  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0904 20:56:19.299026  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.302411  389648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 20:56:19.306360  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.309871  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.312171  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.319635  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.321085  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:19.321297  389648 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 20:56:19.321320  389648 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 20:56:19.321378  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:19.337083  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	W0904 20:56:19.349311  389648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0904 20:56:19.349347  389648 retry.go:31] will retry after 269.872023ms: ssh: handshake failed: EOF
	W0904 20:56:19.349375  389648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0904 20:56:19.349384  389648 retry.go:31] will retry after 359.531202ms: ssh: handshake failed: EOF
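The "retry.go:31] will retry after ..." lines above record minikube's backoff-and-retry behaviour: a transient SSH handshake failure is logged and the dial is attempted again after a short, growing delay. The following is a minimal, self-contained Go sketch of that pattern only; the function name retryWithBackoff, the attempt count, and the concrete delays are illustrative assumptions, not minikube's actual retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn and, on failure, sleeps a growing jittered delay
// before trying again, giving up after maxAttempts. This mirrors the
// "will retry after Xms" lines in the log above (sketch, not minikube code).
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	var lastErr error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		delay := base * time.Duration(1<<attempt)         // exponential growth per attempt
		delay += time.Duration(rand.Int63n(int64(delay))) // jitter so retries spread out
		fmt.Printf("will retry after %v: %v\n", delay, lastErr)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	dials := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		dials++
		if dials < 3 {
			return errors.New("ssh: handshake failed: EOF") // transient, as in the log
		}
		return nil
	})
	fmt.Println("result:", err)
}

Bounding the attempts matters here: the two handshake failures at 20:56:19.349 are treated as transient, and the addon installation continues on the retried connections rather than failing the whole start.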
	I0904 20:56:19.548037  389648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:56:19.652723  389648 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0904 20:56:19.652769  389648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0904 20:56:19.663141  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0904 20:56:19.663174  389648 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0904 20:56:19.746376  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 20:56:19.746406  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0904 20:56:19.746783  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0904 20:56:19.746802  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0904 20:56:19.751531  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 20:56:19.756963  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:56:19.767028  389648 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0904 20:56:19.767122  389648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0904 20:56:19.861846  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0904 20:56:19.861944  389648 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0904 20:56:19.946528  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:56:19.947053  389648 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:19.947078  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0904 20:56:19.955391  389648 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0904 20:56:19.955469  389648 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0904 20:56:19.959896  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:56:19.964131  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 20:56:19.964187  389648 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0904 20:56:19.966688  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 20:56:19.967554  389648 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0904 20:56:19.967597  389648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0904 20:56:19.969099  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:56:19.970420  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 20:56:20.047283  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0904 20:56:20.047381  389648 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0904 20:56:20.054133  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0904 20:56:20.054222  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0904 20:56:20.255401  389648 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:56:20.255496  389648 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 20:56:20.266289  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:20.268923  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0904 20:56:20.345501  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:56:20.348902  389648 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:56:20.348951  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0904 20:56:20.349097  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0904 20:56:20.349114  389648 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0904 20:56:20.448730  389648 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:56:20.448833  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0904 20:56:20.564135  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0904 20:56:20.564226  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0904 20:56:20.751518  389648 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.44906046s)
	I0904 20:56:20.751627  389648 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0904 20:56:20.751853  389648 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.203672375s)
	I0904 20:56:20.754155  389648 node_ready.go:35] waiting up to 6m0s for node "addons-049370" to be "Ready" ...
	I0904 20:56:20.761736  389648 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:20.761796  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0904 20:56:20.846606  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:56:20.856051  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:56:20.866698  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0904 20:56:20.866814  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0904 20:56:21.145379  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:56:21.350361  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.598747473s)
	I0904 20:56:21.367474  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:21.448274  389648 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0904 20:56:21.448385  389648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0904 20:56:21.655407  389648 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-049370" context rescaled to 1 replicas
	I0904 20:56:21.846590  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0904 20:56:21.846680  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0904 20:56:22.161088  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0904 20:56:22.161184  389648 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0904 20:56:22.558322  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0904 20:56:22.558416  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0904 20:56:22.757443  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0904 20:56:22.757535  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	W0904 20:56:22.854862  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:23.062691  389648 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:56:23.062785  389648 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0904 20:56:23.546535  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:56:23.864177  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.107174607s)
	I0904 20:56:24.150368  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.203792982s)
	I0904 20:56:24.150762  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.190798522s)
	I0904 20:56:24.150841  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.184107525s)
	I0904 20:56:24.150883  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.181727834s)
	I0904 20:56:24.150921  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.180454067s)
	I0904 20:56:24.153577  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.887255095s)
	W0904 20:56:24.153617  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:24.153648  389648 retry.go:31] will retry after 274.263741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 20:56:24.255664  389648 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0904 20:56:24.428253  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:25.145429  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.799802302s)
	I0904 20:56:25.145480  389648 addons.go:479] Verifying addon ingress=true in "addons-049370"
	I0904 20:56:25.145982  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.299284675s)
	I0904 20:56:25.146015  389648 addons.go:479] Verifying addon registry=true in "addons-049370"
	I0904 20:56:25.146076  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.289922439s)
	I0904 20:56:25.146132  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.876927435s)
	I0904 20:56:25.146167  389648 addons.go:479] Verifying addon metrics-server=true in "addons-049370"
	I0904 20:56:25.146241  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.000773212s)
	I0904 20:56:25.147285  389648 out.go:179] * Verifying registry addon...
	I0904 20:56:25.147335  389648 out.go:179] * Verifying ingress addon...
	I0904 20:56:25.148139  389648 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-049370 service yakd-dashboard -n yakd-dashboard
	
	I0904 20:56:25.149773  389648 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0904 20:56:25.149773  389648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0904 20:56:25.162307  389648 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:56:25.162382  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:25.162833  389648 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 20:56:25.162892  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
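The kapi.go lines above show the addon verification loop: list the pods matching a label selector and keep polling until each one reports Ready, bounded by a timeout. Below is a rough client-go sketch of that kind of loop. The package paths are the standard client-go ones; the kubeconfig path, polling interval, and function name are assumptions for illustration, not minikube's kapi implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPodsReady polls until every pod matching selector in ns
// reports the Ready condition, or the timeout expires (illustrative sketch).
func waitForLabeledPodsReady(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 3*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors or before pods exist
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	// The kubeconfig path is an assumption; minikube uses its own profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabeledPodsReady(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}

Returning (false, nil) on list errors keeps the poll alive through momentary API hiccups, which is why the log shows repeated "current state: Pending" lines rather than an immediate failure.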
	W0904 20:56:25.256839  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:25.653521  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:25.653811  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:26.153386  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:26.153683  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:26.355221  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.987641855s)
	W0904 20:56:26.355277  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 20:56:26.355305  389648 retry.go:31] will retry after 260.638152ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
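The "no matches for kind VolumeSnapshotClass" error above is the expected first-pass outcome here: csi-hostpath-snapshotclass.yaml (a VolumeSnapshotClass object) is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not registered the new kind yet ("ensure CRDs are installed first") and minikube falls back to its retry loop, which normally succeeds on a later attempt once the CRDs are established. A minimal sketch of the usual ordering fix, waiting for a CRD's Established condition before applying objects of that kind (client-go/apiextensions based; the kubeconfig path, timeout and poll interval below are illustrative assumptions, not values from the harness):

package main

import (
    "context"
    "fmt"
    "time"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/clientcmd"
)

// waitForCRDEstablished polls until the named CRD reports Established=True,
// i.e. the point at which custom resources of that kind can be created.
func waitForCRDEstablished(ctx context.Context, client apiextensionsclient.Interface, name string) error {
    for {
        crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
        if err == nil {
            for _, cond := range crd.Status.Conditions {
                if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
                    return nil
                }
            }
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(500 * time.Millisecond): // assumed poll interval
        }
    }
}

func main() {
    // Illustrative kubeconfig path; inside the node the harness uses /var/lib/minikube/kubeconfig.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    client := apiextensionsclient.NewForConfigOrDie(cfg)

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()
    if err := waitForCRDEstablished(ctx, client, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
        panic(err)
    }
    fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
}
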
	I0904 20:56:26.355424  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.808790402s)
	I0904 20:56:26.355454  389648 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-049370"
	I0904 20:56:26.356999  389648 out.go:179] * Verifying csi-hostpath-driver addon...
	I0904 20:56:26.359335  389648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0904 20:56:26.364572  389648 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:56:26.364592  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
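The kapi.go:96 lines that repeat through the rest of this window are minikube polling the pods behind each addon's label selector (here kubernetes.io/minikube-addons=csi-hostpath-driver in the kube-system namespace, where it has just found 2 pods) and logging their phase until they leave Pending. A rough client-go equivalent of that wait (namespace and selector copied from the log; the poll interval and timeout are assumptions):

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning lists the pods matching the selector and returns once
// every match has reached the Running phase.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    for {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err == nil && len(pods.Items) > 0 {
            allRunning := true
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
                    allRunning = false
                }
            }
            if allRunning {
                return nil
            }
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(500 * time.Millisecond): // assumed; the log suggests roughly 200-500ms between checks
        }
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    defer cancel()
    if err := waitForPodsRunning(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
        panic(err)
    }
}
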
	I0904 20:56:26.415311  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.987009875s)
	W0904 20:56:26.415355  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:26.415375  389648 retry.go:31] will retry after 295.761583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
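The ig-crd.yaml failure above is different in kind from the snapshot-class one: kubectl's client-side validation rejects the manifest itself ("apiVersion not set, kind not set"), so the error is deterministic and every retry that follows produces exactly the same output; backing off cannot fix an invalid file. A quick way to reproduce the complaint locally is to decode each YAML document in the file and check that both TypeMeta fields are present; the sketch below assumes the sigs.k8s.io/yaml package and a local copy of the manifest (the path is illustrative):

package main

import (
    "fmt"
    "os"
    "strings"

    "sigs.k8s.io/yaml"
)

// typeMeta mirrors the two fields kubectl validation complained about.
type typeMeta struct {
    APIVersion string `json:"apiVersion"`
    Kind       string `json:"kind"`
}

func main() {
    data, err := os.ReadFile("ig-crd.yaml") // illustrative local path
    if err != nil {
        panic(err)
    }
    // Quick-and-dirty split on YAML document separators, then check each document.
    for i, doc := range strings.Split(string(data), "\n---") {
        if strings.TrimSpace(doc) == "" {
            continue
        }
        var tm typeMeta
        if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
            fmt.Printf("document %d: unparseable: %v\n", i, err)
            continue
        }
        if tm.APIVersion == "" || tm.Kind == "" {
            fmt.Printf("document %d: apiVersion or kind not set (this is what kubectl rejects)\n", i)
        }
    }
}
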
	I0904 20:56:26.616984  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:56:26.653507  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:26.653558  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:26.711551  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:26.849469  389648 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0904 20:56:26.849544  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:26.862656  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:26.874207  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:26.978097  389648 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0904 20:56:26.994974  389648 addons.go:238] Setting addon gcp-auth=true in "addons-049370"
	I0904 20:56:26.995024  389648 host.go:66] Checking if "addons-049370" exists ...
	I0904 20:56:26.995376  389648 cli_runner.go:164] Run: docker container inspect addons-049370 --format={{.State.Status}}
	I0904 20:56:27.012374  389648 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0904 20:56:27.012428  389648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-049370
	I0904 20:56:27.028863  389648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/addons-049370/id_rsa Username:docker}
	I0904 20:56:27.152149  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:27.152264  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:27.362370  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:27.653212  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:27.653402  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:27.758106  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
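The node_ready.go:57 warnings interleaved here mean the addons-049370 node itself is still reporting Ready=False, which is consistent with every addon pod above still sitting in Pending. The check boils down to reading the node's NodeReady condition; a minimal client-go sketch of that lookup (node name taken from the log, kubeconfig path illustrative):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's NodeReady condition is True.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return false, err
    }
    for _, cond := range node.Status.Conditions {
        if cond.Type == corev1.NodeReady {
            return cond.Status == corev1.ConditionTrue, nil
        }
    }
    return false, nil
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ready, err := nodeIsReady(context.Background(), cs, "addons-049370")
    if err != nil {
        panic(err)
    }
    fmt.Printf("node addons-049370 Ready=%v\n", ready)
}
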
	I0904 20:56:27.863000  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:28.153378  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:28.153490  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:28.363340  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:28.653066  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:28.653239  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:28.861982  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:29.092107  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.475068234s)
	I0904 20:56:29.092190  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.380598109s)
	I0904 20:56:29.092219  389648 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.079820201s)
	W0904 20:56:29.092237  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:29.092263  389648 retry.go:31] will retry after 502.484223ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:29.093894  389648 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 20:56:29.095483  389648 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0904 20:56:29.096510  389648 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0904 20:56:29.096529  389648 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0904 20:56:29.112631  389648 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0904 20:56:29.112663  389648 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0904 20:56:29.128018  389648 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:56:29.128036  389648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0904 20:56:29.143020  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:56:29.153882  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:29.154123  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:29.362692  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:29.454170  389648 addons.go:479] Verifying addon gcp-auth=true in "addons-049370"
	I0904 20:56:29.455515  389648 out.go:179] * Verifying gcp-auth addon...
	I0904 20:56:29.457417  389648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0904 20:56:29.459571  389648 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 20:56:29.459590  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:29.595708  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:29.652683  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:29.652827  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:29.862029  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:29.960159  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:30.114851  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:30.114881  389648 retry.go:31] will retry after 693.179023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:30.152713  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:30.152863  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:30.257051  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:30.362609  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:30.460179  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:30.652858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:30.652980  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:30.808239  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:30.863242  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:30.961106  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:31.154171  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:31.154231  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:31.322382  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:31.322416  389648 retry.go:31] will retry after 1.197657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:31.362659  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:31.459971  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:31.652462  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:31.652562  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:31.862315  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:31.960600  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:32.153504  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:32.153604  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:32.362298  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:32.460616  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:32.520713  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:32.652511  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:32.652595  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:32.760458  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:32.863634  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:32.959731  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:33.040841  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:33.040881  389648 retry.go:31] will retry after 2.457515415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:33.152726  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:33.152743  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:33.362502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:33.460284  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:33.652934  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:33.653038  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:33.862246  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:33.960818  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:34.153166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:34.153280  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:34.362100  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:34.460789  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:34.653810  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:34.653810  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:34.861972  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:34.960530  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:35.153325  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:35.153406  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:35.257683  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:35.362424  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:35.460858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:35.499007  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:35.653242  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:35.653299  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:35.861724  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:35.959645  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:36.016874  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:36.016905  389648 retry.go:31] will retry after 3.533514487s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:36.152675  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:36.152869  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:36.362591  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:36.459815  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:36.652244  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:36.652298  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:36.862251  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:36.960712  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:37.153481  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:37.153520  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:37.362437  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:37.460789  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:37.652357  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:37.652379  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:37.756527  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:37.862037  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:37.960447  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:38.153502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:38.153539  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:38.362816  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:38.460210  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:38.652903  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:38.653135  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:38.862007  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:38.960650  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:39.153578  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:39.153774  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:39.361972  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:39.460461  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:39.551574  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:39.653495  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:39.653650  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:39.757372  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:39.862832  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:39.960361  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:40.069853  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:40.069886  389648 retry.go:31] will retry after 3.560952844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:40.153097  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:40.153206  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:40.363022  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:40.460438  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:40.653028  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:40.653073  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:40.861984  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:40.960713  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:41.153196  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:41.153351  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:41.361826  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:41.460267  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:41.652784  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:41.652802  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:41.862344  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:41.960834  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:42.152737  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:42.152979  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:42.257147  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:42.362587  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:42.459962  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:42.652593  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:42.652591  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:42.862875  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:42.960672  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:43.153594  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:43.153640  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:43.362502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:43.459930  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:43.631059  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:43.652889  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:43.653087  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:43.863337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:43.960266  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:56:44.144205  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:44.144237  389648 retry.go:31] will retry after 6.676490417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:44.152882  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:44.152942  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:44.257493  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:44.362019  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:44.460489  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:44.652917  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:44.653070  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:44.863130  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:44.960584  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:45.153391  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:45.153527  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:45.362608  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:45.460071  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:45.652849  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:45.652915  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:45.862777  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:45.960533  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:46.153667  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:46.153804  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:46.362632  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:46.459907  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:46.652296  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:46.652477  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:46.756788  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:46.862351  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:46.960886  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:47.152391  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:47.152568  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:47.362190  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:47.460736  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:47.653232  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:47.653276  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:47.862018  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:47.960474  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:48.153153  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:48.153187  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:48.361882  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:48.460168  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:48.652729  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:48.652864  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:48.757107  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:48.862689  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:48.960180  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:49.152873  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:49.153024  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:49.362233  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:49.460721  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:49.653148  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:49.653303  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:49.861892  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:49.960294  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:50.153077  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:50.153232  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:50.362407  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:50.460915  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:50.652501  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:50.652591  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:50.821192  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:50.862867  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:50.960502  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:51.153049  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:51.153160  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:51.256873  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	W0904 20:56:51.328889  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:51.328930  389648 retry.go:31] will retry after 8.058478981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
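By this point the retry.go:31 intervals for the ig-crd apply have grown from roughly 260ms to just over 8s (260ms, 296ms, 502ms, 693ms, 1.2s, 2.46s, 3.53s, 3.56s, 6.68s, 8.06s), i.e. an approximately doubling backoff with some jitter. A stdlib-only sketch of a loop with that shape (the multiplier, jitter and cap are assumptions, not minikube's exact parameters):

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// retryWithBackoff retries fn until it succeeds or attempts are exhausted,
// sleeping an exponentially growing, jittered interval between attempts.
func retryWithBackoff(attempts int, initial, max time.Duration, fn func() error) error {
    delay := initial
    var lastErr error
    for i := 0; i < attempts; i++ {
        if lastErr = fn(); lastErr == nil {
            return nil
        }
        // Add up to 50% jitter, then cap the delay.
        sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
        if sleep > max {
            sleep = max
        }
        fmt.Printf("will retry after %v: %v\n", sleep, lastErr)
        time.Sleep(sleep)
        delay *= 2
    }
    return lastErr
}

func main() {
    err := retryWithBackoff(10, 250*time.Millisecond, 10*time.Second, func() error {
        // Stand-in for the kubectl apply of ig-crd.yaml; it fails identically every
        // time because the manifest itself is invalid, so backoff alone cannot help.
        return errors.New("error validating ig-crd.yaml: apiVersion not set, kind not set")
    })
    fmt.Println("gave up:", err)
}
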
	I0904 20:56:51.362542  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:51.459958  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:51.652490  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:51.652667  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:51.862401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:51.960987  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:52.152519  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:52.152675  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:52.362366  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:52.460825  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:52.652376  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:52.652430  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:52.862135  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:52.960933  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:53.152709  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:53.152720  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:53.257375  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:53.361785  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:53.460337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:53.652733  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:53.653014  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:53.862742  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:53.960136  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:54.152726  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:54.152730  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:54.362518  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:54.461080  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:54.652473  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:54.652664  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:54.862347  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:54.961384  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:55.153124  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:55.153270  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:55.257640  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:55.362463  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:55.460990  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:55.652354  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:55.652574  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:55.862388  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:55.960122  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:56.152694  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:56.152920  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:56.361858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:56.460337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:56.653103  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:56.653185  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:56.862426  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:56.960988  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:57.152323  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:57.152431  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:57.362264  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:57.460771  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:57.653160  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:57.653308  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:57.756540  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:57.861955  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:57.960493  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:58.153029  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:58.153223  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:58.362583  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:58.460924  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:58.652481  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:58.652538  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:58.862381  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:58.960880  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:59.152567  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:59.152726  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:56:59.362851  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:56:59.387964  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:56:59.460401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:56:59.652881  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:56:59.653048  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:56:59.757426  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:56:59.862341  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0904 20:56:59.907626  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:56:59.907661  389648 retry.go:31] will retry after 19.126227015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
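Editor's note: the validation error above reports that the top-level apiVersion and kind fields are missing (or unreadable) when kubectl parses /etc/kubernetes/addons/ig-crd.yaml; the same apply is retried at 20:57:19 below and fails identically, which suggests the manifest content itself, not a transient apply race, is what the validator rejects. For illustration only, a minimal CustomResourceDefinition that would pass this check looks like the hypothetical sketch below. All names in it (widgets.example.com, Widget, v1) are placeholders and are not taken from the ig-crd.yaml shipped with the addon.

    # Hypothetical placeholder CRD, not the contents of ig-crd.yaml.
    # The two fields kubectl reports as "not set" are apiVersion and kind.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com        # must be <plural>.<group>; placeholder value
    spec:
      group: example.com               # placeholder API group
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object             # minimal schema accepted by apiextensions/v1

As the stderr line itself notes, validation could be bypassed with --validate=false, but that would only hide the missing-field problem rather than fix the manifest.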
	I0904 20:56:59.960065  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:00.152732  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:00.152876  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:00.363049  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:00.460514  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:00.653154  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:00.653270  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:00.862296  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:00.961337  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:01.152894  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:01.153019  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:01.362117  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:01.460734  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:01.653271  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:01.653460  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:01.862509  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:01.960047  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:02.152837  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:02.152896  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 20:57:02.257044  389648 node_ready.go:57] node "addons-049370" has "Ready":"False" status (will retry)
	I0904 20:57:02.362872  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:02.460517  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:02.653172  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:02.653366  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:02.862373  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:02.961084  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:03.152784  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:03.152910  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:03.362694  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:03.459964  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:03.652371  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:03.652557  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:03.757648  389648 node_ready.go:49] node "addons-049370" is "Ready"
	I0904 20:57:03.757687  389648 node_ready.go:38] duration metric: took 43.003447045s for node "addons-049370" to be "Ready" ...
	I0904 20:57:03.757707  389648 api_server.go:52] waiting for apiserver process to appear ...
	I0904 20:57:03.757770  389648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 20:57:03.775055  389648 api_server.go:72] duration metric: took 44.653167184s to wait for apiserver process to appear ...
	I0904 20:57:03.775146  389648 api_server.go:88] waiting for apiserver healthz status ...
	I0904 20:57:03.775175  389648 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 20:57:03.847773  389648 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0904 20:57:03.848894  389648 api_server.go:141] control plane version: v1.34.0
	I0904 20:57:03.848928  389648 api_server.go:131] duration metric: took 73.768685ms to wait for apiserver health ...
	I0904 20:57:03.848941  389648 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 20:57:03.853285  389648 system_pods.go:59] 20 kube-system pods found
	I0904 20:57:03.853319  389648 system_pods.go:61] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending
	I0904 20:57:03.853326  389648 system_pods.go:61] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending
	I0904 20:57:03.853331  389648 system_pods.go:61] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending
	I0904 20:57:03.853336  389648 system_pods.go:61] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending
	I0904 20:57:03.853341  389648 system_pods.go:61] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending
	I0904 20:57:03.853346  389648 system_pods.go:61] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:03.853352  389648 system_pods.go:61] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:03.853358  389648 system_pods.go:61] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:03.853366  389648 system_pods.go:61] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:03.853372  389648 system_pods.go:61] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending
	I0904 20:57:03.853380  389648 system_pods.go:61] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:03.853389  389648 system_pods.go:61] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:03.853403  389648 system_pods.go:61] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:03.853412  389648 system_pods.go:61] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending
	I0904 20:57:03.853423  389648 system_pods.go:61] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending
	I0904 20:57:03.853431  389648 system_pods.go:61] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:03.853439  389648 system_pods.go:61] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending
	I0904 20:57:03.853445  389648 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending
	I0904 20:57:03.853455  389648 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending
	I0904 20:57:03.853460  389648 system_pods.go:61] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending
	I0904 20:57:03.853471  389648 system_pods.go:74] duration metric: took 4.521878ms to wait for pod list to return data ...
	I0904 20:57:03.853485  389648 default_sa.go:34] waiting for default service account to be created ...
	I0904 20:57:03.855589  389648 default_sa.go:45] found service account: "default"
	I0904 20:57:03.855645  389648 default_sa.go:55] duration metric: took 2.148457ms for default service account to be created ...
	I0904 20:57:03.855669  389648 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 20:57:03.864140  389648 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:57:03.864166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:03.865511  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:03.865543  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending
	I0904 20:57:03.865552  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending
	I0904 20:57:03.865558  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending
	I0904 20:57:03.865563  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending
	I0904 20:57:03.865568  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending
	I0904 20:57:03.865574  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:03.865580  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:03.865586  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:03.865591  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:03.865595  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending
	I0904 20:57:03.865599  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:03.865602  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:03.865611  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:03.865621  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending
	I0904 20:57:03.865627  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending
	I0904 20:57:03.865631  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:03.865635  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending
	I0904 20:57:03.865639  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending
	I0904 20:57:03.865645  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending
	I0904 20:57:03.865650  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:03.865666  389648 retry.go:31] will retry after 266.681541ms: missing components: kube-dns
	I0904 20:57:03.963849  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:04.148992  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:04.149036  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:04.149049  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:57:04.149060  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:04.149065  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending
	I0904 20:57:04.149077  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:04.149083  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:04.149090  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:04.149095  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:04.149101  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:04.149158  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:04.149164  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:04.149171  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:04.149179  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:04.149188  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending
	I0904 20:57:04.149196  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:04.149207  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:04.149216  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending
	I0904 20:57:04.149226  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.149236  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.149249  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:04.149269  389648 retry.go:31] will retry after 384.617911ms: missing components: kube-dns
	I0904 20:57:04.154716  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:04.154839  389648 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:57:04.154853  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:04.366569  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:04.466268  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:04.567997  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:04.568030  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:04.568038  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:57:04.568045  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:04.568050  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 20:57:04.568057  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:04.568063  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:04.568067  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:04.568071  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:04.568074  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:04.568081  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:04.568086  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:04.568091  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:04.568096  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:04.568110  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 20:57:04.568115  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:04.568122  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:04.568127  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 20:57:04.568135  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.568140  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:04.568147  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:04.568163  389648 retry.go:31] will retry after 481.666443ms: missing components: kube-dns
	I0904 20:57:04.667086  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:04.667538  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:04.862644  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:04.959928  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:05.053770  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:05.053813  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:05.053821  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 20:57:05.053829  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:05.053834  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 20:57:05.053840  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:05.053846  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:05.053850  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:05.053854  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:05.053858  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:05.053863  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:05.053871  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:05.053875  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:05.053880  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:05.053887  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 20:57:05.053893  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:05.053900  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:05.053905  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 20:57:05.053912  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.053918  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.053924  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 20:57:05.053939  389648 retry.go:31] will retry after 484.806352ms: missing components: kube-dns
	I0904 20:57:05.153022  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:05.153142  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:05.363067  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:05.460377  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:05.543458  389648 system_pods.go:86] 20 kube-system pods found
	I0904 20:57:05.543495  389648 system_pods.go:89] "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 20:57:05.543501  389648 system_pods.go:89] "coredns-66bc5c9577-m8z9t" [461c8bca-3775-4a4e-a6ea-46896415415c] Running
	I0904 20:57:05.543508  389648 system_pods.go:89] "csi-hostpath-attacher-0" [a2237a13-ee99-438b-8cf6-3c88c95e7d6b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 20:57:05.543514  389648 system_pods.go:89] "csi-hostpath-resizer-0" [42e75435-91d0-4b2b-a7f7-5dc7aed8258e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 20:57:05.543520  389648 system_pods.go:89] "csi-hostpathplugin-98s7l" [0f04c863-d08f-4ce8-8aba-f8431c710bad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 20:57:05.543525  389648 system_pods.go:89] "etcd-addons-049370" [a5b5bef6-6ee7-457c-8ed6-7fcd25717290] Running
	I0904 20:57:05.543530  389648 system_pods.go:89] "kindnet-7bfb9" [255cca7e-1239-404f-a063-333e80f7b32c] Running
	I0904 20:57:05.543542  389648 system_pods.go:89] "kube-apiserver-addons-049370" [1bf36e27-c0a6-4950-9554-56e2abea00d9] Running
	I0904 20:57:05.543552  389648 system_pods.go:89] "kube-controller-manager-addons-049370" [10f65f3d-208b-4468-88b3-7bd1f31a99ef] Running
	I0904 20:57:05.543557  389648 system_pods.go:89] "kube-ingress-dns-minikube" [1f14a514-2ff6-4572-97f7-91a8ef6a1e64] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 20:57:05.543563  389648 system_pods.go:89] "kube-proxy-k5lnm" [f9894455-f642-473d-95bc-6d2aeccae2cf] Running
	I0904 20:57:05.543567  389648 system_pods.go:89] "kube-scheduler-addons-049370" [a95d0b5b-fd44-47f5-bf4d-55c2038da3de] Running
	I0904 20:57:05.543571  389648 system_pods.go:89] "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 20:57:05.543579  389648 system_pods.go:89] "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 20:57:05.543585  389648 system_pods.go:89] "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 20:57:05.543593  389648 system_pods.go:89] "registry-creds-764b6fb674-zkcq2" [ba1e4f2c-5cae-4b0d-be29-bb07606db48c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 20:57:05.543598  389648 system_pods.go:89] "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 20:57:05.543605  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5d9jh" [c0db4104-0944-4863-b900-e7db691bf3f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.543612  389648 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mgxvk" [afd9e3b4-4dc1-41fb-ae65-96ec88d7c7b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 20:57:05.543618  389648 system_pods.go:89] "storage-provisioner" [8fd9befb-6934-4cdc-bb01-07e92f780dfa] Running
	I0904 20:57:05.543626  389648 system_pods.go:126] duration metric: took 1.687941335s to wait for k8s-apps to be running ...
	I0904 20:57:05.543650  389648 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 20:57:05.543694  389648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 20:57:05.555385  389648 system_svc.go:56] duration metric: took 11.725653ms WaitForService to wait for kubelet
	I0904 20:57:05.555412  389648 kubeadm.go:578] duration metric: took 46.433531844s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:57:05.555439  389648 node_conditions.go:102] verifying NodePressure condition ...
	I0904 20:57:05.558136  389648 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 20:57:05.558169  389648 node_conditions.go:123] node cpu capacity is 8
	I0904 20:57:05.558187  389648 node_conditions.go:105] duration metric: took 2.741859ms to run NodePressure ...
	I0904 20:57:05.558203  389648 start.go:241] waiting for startup goroutines ...
	I0904 20:57:05.653335  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:05.653493  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:05.862594  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:05.960405  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:06.155853  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:06.155860  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:06.363166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:06.460689  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:06.653352  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:06.653395  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:06.862486  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:06.960974  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:07.152583  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:07.152693  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:07.362526  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:07.461234  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:07.653353  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:07.653430  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:07.862588  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:07.961373  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:08.153869  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:08.153919  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:08.363098  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:08.460845  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:08.652618  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:08.652818  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:08.863708  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:08.961239  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:09.153619  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:09.153661  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:09.363027  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:09.461172  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:09.653178  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:09.653259  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:09.862455  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:09.961183  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:10.153505  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:10.153868  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:10.362513  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:10.460913  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:10.653892  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:10.654021  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:10.863179  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:10.961003  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:11.152924  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:11.152937  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:11.363254  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:11.460435  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:11.653707  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:11.653749  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:11.862653  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:11.960670  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:12.153474  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:12.153582  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:12.362607  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:12.460401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:12.653547  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:12.653621  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:12.863488  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:12.961428  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:13.153780  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:13.153926  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:13.363601  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:13.463509  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:13.653590  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:13.653721  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:13.863091  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:13.960747  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:14.156722  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:14.156892  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:14.363724  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:14.460915  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:14.652850  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:14.652930  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:14.863379  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:14.960898  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:15.153105  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:15.153190  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:15.364645  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:15.466746  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:15.653529  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:15.653552  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:15.863473  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:15.961399  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:16.153418  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:16.153633  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:16.365659  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:16.460427  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:16.655316  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:16.656314  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:16.863846  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:16.960170  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:17.153040  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:17.153440  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:17.362488  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:17.461324  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:17.653058  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:17.653099  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:17.862919  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:17.960632  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:18.153699  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:18.153804  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:18.362710  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:18.460244  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:18.653100  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:18.653412  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:18.862825  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:18.963826  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:19.034934  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:19.152876  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:19.153003  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:19.363216  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:19.461101  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:19.654705  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:19.654966  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:19.862758  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:19.960238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:57:19.965214  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:19.965249  389648 retry.go:31] will retry after 20.693378838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:20.153317  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:20.153424  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:20.362498  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:20.461668  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:20.653715  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:20.653849  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:20.862660  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:20.960422  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:21.153279  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:21.153367  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:21.362521  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:21.461453  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:21.653611  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:21.653616  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:21.862958  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:21.960988  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:22.152881  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:22.152896  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:22.362933  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:22.460865  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:22.652773  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:22.652825  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:22.862669  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:22.960462  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:23.153822  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:23.154026  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:23.362981  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:23.460282  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:23.653482  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:23.653565  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:23.862339  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:23.960741  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:24.153397  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:24.153562  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:24.362213  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:24.460604  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:24.653463  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:24.653585  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:24.862661  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:24.960921  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:25.152671  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:25.152676  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:25.362282  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:25.460981  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:25.652991  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:25.653126  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:25.863187  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:25.960971  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:26.155115  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:26.155549  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:26.364641  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:26.460565  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:26.653351  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:26.653460  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:26.862335  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:26.961215  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:27.153245  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:27.153382  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:27.362420  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:27.460886  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:27.652946  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:27.653004  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:27.862794  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:27.960433  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:28.153554  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:28.153563  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:28.362061  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:28.460951  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:28.653077  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:28.653166  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:28.862812  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:28.960910  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:29.152712  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:29.152713  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:29.362969  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:29.460457  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:29.653716  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:29.653816  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:29.862674  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:29.960527  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:30.153309  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:30.153467  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:30.364159  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:30.465320  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:30.653741  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:30.653775  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:30.862640  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:30.963437  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:31.153238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:31.153259  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:31.362036  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:31.460565  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:31.653248  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:31.653298  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:31.863300  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:31.960651  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:32.153238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:32.153326  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:32.362194  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:32.460483  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:32.653633  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:32.653670  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:32.862856  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:32.960920  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:33.163353  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:33.163571  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:33.363807  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:33.463275  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:33.661398  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:33.661866  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:34.067754  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:34.158198  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:34.252681  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:34.267829  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:34.462230  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:34.462566  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:34.655260  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:34.655318  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:34.862629  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:34.960553  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:35.153838  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:35.153871  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:35.363148  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:35.461050  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:35.653525  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:35.653658  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:35.864175  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:35.961508  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:36.154202  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:36.154257  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:36.363162  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:36.460840  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:36.653022  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:36.653219  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:36.863704  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:36.960613  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:37.153938  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:37.153958  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:37.363084  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:37.461050  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:37.652708  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:37.652726  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:37.862959  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:37.960607  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:38.153906  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:38.154265  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:38.363779  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:38.460618  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:38.653662  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:38.653739  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:38.862850  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:38.960535  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:39.153828  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:39.153870  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:39.363192  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:39.461549  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:39.653371  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:39.653594  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:39.862436  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:39.961060  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:40.153255  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:40.153265  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:40.362463  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:40.461112  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:40.653195  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:40.653238  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:40.659168  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:57:40.863485  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:40.961390  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:41.153507  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:41.153683  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:41.363294  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:41.460511  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 20:57:41.586847  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:41.586876  389648 retry.go:31] will retry after 18.584233469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 20:57:41.653116  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:41.653297  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:41.864041  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:41.960341  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:42.153090  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:42.153093  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:42.363050  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:42.460434  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:42.653587  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:42.653634  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:42.862872  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:42.960883  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:43.153266  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:43.153570  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:43.362999  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:43.460713  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:43.653498  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:43.653565  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:43.862351  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:43.960779  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:44.152645  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:44.152744  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:44.362647  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:44.460216  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:44.653789  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:44.654025  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:44.863259  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:44.961105  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:45.153229  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:45.153267  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:45.363497  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:45.461501  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:45.653400  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:45.653589  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:45.862262  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:45.960864  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:46.152860  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:46.152890  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:46.363058  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:46.460848  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:46.653051  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:46.653077  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:46.863163  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:46.960859  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:47.153234  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:47.153238  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:47.363116  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:47.460543  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:47.653774  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:47.653836  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:47.863023  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:47.961011  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:48.153044  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:48.153183  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:48.363514  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:48.461320  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:48.653777  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:48.653858  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:48.862550  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:48.961142  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:49.153028  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:57:49.153220  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:49.362652  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:49.459891  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:49.653164  389648 kapi.go:107] duration metric: took 1m24.503386944s to wait for kubernetes.io/minikube-addons=registry ...
	I0904 20:57:49.653212  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:49.862954  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:49.960303  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:50.153422  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:50.362439  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:50.460798  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:50.653419  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:50.862686  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:50.960970  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:51.154179  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:51.363166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:51.460875  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:51.652647  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:51.863070  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:51.960526  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:52.153813  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:52.362711  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:52.460087  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:52.653154  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:52.863206  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:52.960823  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:53.153125  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:53.363443  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:53.461004  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:53.656643  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:53.866801  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:53.961469  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:54.153974  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:54.364415  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:54.461643  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:54.655730  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:54.867016  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:54.961177  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:55.155271  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:55.363462  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:55.461909  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:55.654080  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:55.862506  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:55.962401  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:56.153639  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:56.363134  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:56.460790  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:56.653986  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:56.862951  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:56.959890  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:57.152935  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:57.363141  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:57.460860  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:57.653029  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:57.863171  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:57.961135  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:58.153239  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:58.363391  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:58.460583  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:58.654112  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:58.863905  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:58.960604  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:59.153765  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:59.363398  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:59.460827  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:57:59.653240  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:57:59.863414  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:57:59.960740  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:00.154243  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:00.172145  389648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 20:58:00.363535  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:00.460166  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:00.653062  389648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:58:00.863155  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:00.960597  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:01.153145  389648 kapi.go:107] duration metric: took 1m36.0033494s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0904 20:58:01.362175  389648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.189987346s)
	W0904 20:58:01.362237  389648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 20:58:01.362358  389648 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
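The failure above is the terminal one for the inspektor-gadget addon: every retry fails because kubectl validation reports that /etc/kubernetes/addons/ig-crd.yaml has neither apiVersion nor kind set. The actual contents of that file are not captured in this log, so the following is only a hedged sketch of a CustomResourceDefinition manifest that would pass this validation; the resource names are illustrative, not taken from this run.

apiVersion: apiextensions.k8s.io/v1          # required; its absence triggers "apiVersion not set"
kind: CustomResourceDefinition               # required; its absence triggers "kind not set"
metadata:
  name: traces.gadget.kinvolk.io             # hypothetical CRD name
spec:
  group: gadget.kinvolk.io
  scope: Namespaced
  names:
    kind: Trace
    plural: traces
    singular: trace
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true

The stderr's own suggestion of --validate=false would only skip the check; it would not repair a manifest that is genuinely empty or truncated on disk.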
	I0904 20:58:01.377323  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:01.461301  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:01.862664  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:01.960172  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:02.362264  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:02.460782  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:02.863228  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:02.960690  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:03.362947  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:03.461061  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:58:03.863740  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:03.960182  389648 kapi.go:107] duration metric: took 1m34.502765752s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0904 20:58:03.962033  389648 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-049370 cluster.
	I0904 20:58:03.963517  389648 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0904 20:58:03.964745  389648 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
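The three gcp-auth notes above describe the addon's behavior: credentials are mounted into every new pod unless the pod carries the gcp-auth-skip-secret label. As a minimal sketch of opting a pod out (pod and container names are hypothetical, not from this run):

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                  # hypothetical pod name
  labels:
    gcp-auth-skip-secret: "true"      # per the note above, skips credential mounting for this pod
spec:
  containers:
    - name: app                       # hypothetical container
      image: docker.io/nginx:alpine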
	I0904 20:58:04.362552  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:04.863544  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:05.363523  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:05.862668  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:06.363450  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:06.862835  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:07.363579  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:07.862482  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:08.362742  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:08.863840  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:09.365433  389648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:58:09.862609  389648 kapi.go:107] duration metric: took 1m43.503273609s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0904 20:58:09.864811  389648 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, ingress-dns, registry-creds, nvidia-device-plugin, default-storageclass, cloud-spanner, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0904 20:58:09.865999  389648 addons.go:514] duration metric: took 1m50.744105832s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner ingress-dns registry-creds nvidia-device-plugin default-storageclass cloud-spanner metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0904 20:58:09.866049  389648 start.go:246] waiting for cluster config update ...
	I0904 20:58:09.866079  389648 start.go:255] writing updated cluster config ...
	I0904 20:58:09.866376  389648 ssh_runner.go:195] Run: rm -f paused
	I0904 20:58:09.869857  389648 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 20:58:09.872605  389648 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m8z9t" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.876507  389648 pod_ready.go:94] pod "coredns-66bc5c9577-m8z9t" is "Ready"
	I0904 20:58:09.876529  389648 pod_ready.go:86] duration metric: took 3.904383ms for pod "coredns-66bc5c9577-m8z9t" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.878366  389648 pod_ready.go:83] waiting for pod "etcd-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.881658  389648 pod_ready.go:94] pod "etcd-addons-049370" is "Ready"
	I0904 20:58:09.881678  389648 pod_ready.go:86] duration metric: took 3.291911ms for pod "etcd-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.883326  389648 pod_ready.go:83] waiting for pod "kube-apiserver-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.886438  389648 pod_ready.go:94] pod "kube-apiserver-addons-049370" is "Ready"
	I0904 20:58:09.886456  389648 pod_ready.go:86] duration metric: took 3.11401ms for pod "kube-apiserver-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:09.888020  389648 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:10.273761  389648 pod_ready.go:94] pod "kube-controller-manager-addons-049370" is "Ready"
	I0904 20:58:10.273790  389648 pod_ready.go:86] duration metric: took 385.749346ms for pod "kube-controller-manager-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:10.473572  389648 pod_ready.go:83] waiting for pod "kube-proxy-k5lnm" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:10.873887  389648 pod_ready.go:94] pod "kube-proxy-k5lnm" is "Ready"
	I0904 20:58:10.873914  389648 pod_ready.go:86] duration metric: took 400.319117ms for pod "kube-proxy-k5lnm" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:11.074268  389648 pod_ready.go:83] waiting for pod "kube-scheduler-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:11.473936  389648 pod_ready.go:94] pod "kube-scheduler-addons-049370" is "Ready"
	I0904 20:58:11.473971  389648 pod_ready.go:86] duration metric: took 399.67197ms for pod "kube-scheduler-addons-049370" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 20:58:11.473987  389648 pod_ready.go:40] duration metric: took 1.604097075s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 20:58:11.514779  389648 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 20:58:11.516435  389648 out.go:179] * Done! kubectl is now configured to use "addons-049370" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 21:01:59 addons-049370 crio[1043]: time="2025-09-04 21:01:59.618864806Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=dc9c413c-a49c-4949-8e11-443a005710cf name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:02:14 addons-049370 crio[1043]: time="2025-09-04 21:02:14.383971398Z" level=info msg="Stopping pod sandbox: 370c8713c2061f1ecac56250f7932ef081c0c5c1ac9be336cba5cc32cdba66de" id=d602bfc3-9ae7-41fc-af80-720cf5c480c7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:02:14 addons-049370 crio[1043]: time="2025-09-04 21:02:14.384015714Z" level=info msg="Stopped pod sandbox (already stopped): 370c8713c2061f1ecac56250f7932ef081c0c5c1ac9be336cba5cc32cdba66de" id=d602bfc3-9ae7-41fc-af80-720cf5c480c7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 21:02:14 addons-049370 crio[1043]: time="2025-09-04 21:02:14.384279869Z" level=info msg="Removing pod sandbox: 370c8713c2061f1ecac56250f7932ef081c0c5c1ac9be336cba5cc32cdba66de" id=b17ed0bd-fa09-4b68-a5d7-21bebc5cf51f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:02:14 addons-049370 crio[1043]: time="2025-09-04 21:02:14.389391836Z" level=info msg="Removed pod sandbox: 370c8713c2061f1ecac56250f7932ef081c0c5c1ac9be336cba5cc32cdba66de" id=b17ed0bd-fa09-4b68-a5d7-21bebc5cf51f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 21:02:28 addons-049370 crio[1043]: time="2025-09-04 21:02:28.431607761Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=84bc6298-7272-4c65-9ce3-e37faee44064 name=/runtime.v1.ImageService/PullImage
	Sep 04 21:02:28 addons-049370 crio[1043]: time="2025-09-04 21:02:28.446006249Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 04 21:02:59 addons-049370 crio[1043]: time="2025-09-04 21:02:59.138702067Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=5d7168f5-e905-4dcc-abd5-cf770f446239 name=/runtime.v1.ImageService/PullImage
	Sep 04 21:02:59 addons-049370 crio[1043]: time="2025-09-04 21:02:59.142103720Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Sep 04 21:03:12 addons-049370 crio[1043]: time="2025-09-04 21:03:12.978563499Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fc08c487-a0bb-4f35-94f2-495aea07524e name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:12 addons-049370 crio[1043]: time="2025-09-04 21:03:12.978890155Z" level=info msg="Image docker.io/nginx:alpine not found" id=fc08c487-a0bb-4f35-94f2-495aea07524e name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:23 addons-049370 crio[1043]: time="2025-09-04 21:03:23.978489059Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=34ed16c2-7a7a-4780-8592-c5fac9f5f298 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:23 addons-049370 crio[1043]: time="2025-09-04 21:03:23.978789742Z" level=info msg="Image docker.io/nginx:alpine not found" id=34ed16c2-7a7a-4780-8592-c5fac9f5f298 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:29 addons-049370 crio[1043]: time="2025-09-04 21:03:29.792807884Z" level=info msg="Pulling image: docker.io/nginx:latest" id=3162ef9d-efd8-4d0d-af97-6fbe20f89e24 name=/runtime.v1.ImageService/PullImage
	Sep 04 21:03:29 addons-049370 crio[1043]: time="2025-09-04 21:03:29.796436092Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 04 21:03:30 addons-049370 crio[1043]: time="2025-09-04 21:03:30.517487489Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=03d20a44-a068-443e-b1b5-dbaf55a411e9 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:30 addons-049370 crio[1043]: time="2025-09-04 21:03:30.517844235Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=03d20a44-a068-443e-b1b5-dbaf55a411e9 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:36 addons-049370 crio[1043]: time="2025-09-04 21:03:36.978073879Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b5ec0976-421d-4554-940b-e27aa72bf5a6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:36 addons-049370 crio[1043]: time="2025-09-04 21:03:36.978369963Z" level=info msg="Image docker.io/nginx:alpine not found" id=b5ec0976-421d-4554-940b-e27aa72bf5a6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:44 addons-049370 crio[1043]: time="2025-09-04 21:03:44.977944693Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=302d6273-ed57-4f2c-855e-c09858ecadd8 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:44 addons-049370 crio[1043]: time="2025-09-04 21:03:44.978218340Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=302d6273-ed57-4f2c-855e-c09858ecadd8 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:47 addons-049370 crio[1043]: time="2025-09-04 21:03:47.978127071Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=51192e80-0d1a-4b82-8cee-756759ecf36b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:03:47 addons-049370 crio[1043]: time="2025-09-04 21:03:47.978435226Z" level=info msg="Image docker.io/nginx:alpine not found" id=51192e80-0d1a-4b82-8cee-756759ecf36b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:04:00 addons-049370 crio[1043]: time="2025-09-04 21:04:00.451710274Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=2624ab5b-cc49-49e1-9f08-ded890f628ca name=/runtime.v1.ImageService/PullImage
	Sep 04 21:04:00 addons-049370 crio[1043]: time="2025-09-04 21:04:00.468036869Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0812830cff5e8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   9db653f3755b4       busybox
	821cbe3252d57       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   f325578cefe27       csi-hostpathplugin-98s7l
	cb89aa1bc3c60       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   f325578cefe27       csi-hostpathplugin-98s7l
	c838f8e9fc3db       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   f325578cefe27       csi-hostpathplugin-98s7l
	d0e1e178f59da       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           6 minutes ago       Running             hostpath                                 0                   f325578cefe27       csi-hostpathplugin-98s7l
	2739b36b07e33       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                6 minutes ago       Running             node-driver-registrar                    0                   f325578cefe27       csi-hostpathplugin-98s7l
	71f3c44efa7ed       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             6 minutes ago       Running             controller                               0                   616b907580ffe       ingress-nginx-controller-9cc49f96f-9hj2l
	7edf2c6fe20a3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506                            6 minutes ago       Running             gadget                                   0                   c4ec61756e1cd       gadget-whkft
	c3cf3d964594d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago       Running             csi-external-health-monitor-controller   0                   f325578cefe27       csi-hostpathplugin-98s7l
	5c7cafdaee154       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   c6e94adfea087       snapshot-controller-7d9fbc56b8-mgxvk
	cf878cd883800       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   a16557be7ddd8       snapshot-controller-7d9fbc56b8-5d9jh
	3ba8ba2525962       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   6 minutes ago       Exited              patch                                    0                   6e76b5fa98c54       ingress-nginx-admission-patch-gtdvl
	712fefc65d0c1       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago       Running             csi-attacher                             0                   06eca301ea94b       csi-hostpath-attacher-0
	4d5989f69feeb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   6 minutes ago       Exited              create                                   0                   8ab625b3a8d0f       ingress-nginx-admission-create-bcplk
	56891fc3e82a7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              6 minutes ago       Running             csi-resizer                              0                   cd89e12ceb21a       csi-hostpath-resizer-0
	7d2cafb9fbef5       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               6 minutes ago       Running             minikube-ingress-dns                     0                   ab8997b22bdfa       kube-ingress-dns-minikube
	ae86f0dc5f527       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   88ad798d96077       local-path-provisioner-648f6765c9-dlgrh
	5a078a0cc821d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             7 minutes ago       Running             storage-provisioner                      0                   789a7bd2ea563       storage-provisioner
	f34769614a539       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             7 minutes ago       Running             coredns                                  0                   4201e6440890f       coredns-66bc5c9577-m8z9t
	c934f0f4b966c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             7 minutes ago       Running             kindnet-cni                              0                   15477ade7fdb4       kindnet-7bfb9
	f6a9e9c72d6ba       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             7 minutes ago       Running             kube-proxy                               0                   8022b4762a732       kube-proxy-k5lnm
	3f2b5739caaa5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago       Running             etcd                                     0                   a0a640c2dfdf7       etcd-addons-049370
	c29c83b9956a1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             8 minutes ago       Running             kube-scheduler                           0                   dcb7c5c1869a2       kube-scheduler-addons-049370
	c5667de904598       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             8 minutes ago       Running             kube-controller-manager                  0                   8e65b647d075e       kube-controller-manager-addons-049370
	e754d67808d98       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             8 minutes ago       Running             kube-apiserver                           0                   ded69ea3b436b       kube-apiserver-addons-049370
	
	
	==> coredns [f34769614a539f8a9deabe583e02287082f6ea11bf18d071546e1a719cab9a53] <==
	[INFO] 10.244.0.19:55773 - 30973 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004937086s
	[INFO] 10.244.0.19:54298 - 21135 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004015449s
	[INFO] 10.244.0.19:54298 - 21393 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005610527s
	[INFO] 10.244.0.19:47617 - 25189 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.0041739s
	[INFO] 10.244.0.19:47617 - 25434 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004264819s
	[INFO] 10.244.0.19:52948 - 36411 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103838s
	[INFO] 10.244.0.19:52948 - 36178 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000155332s
	[INFO] 10.244.0.22:41475 - 45724 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188468s
	[INFO] 10.244.0.22:38337 - 8650 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000284766s
	[INFO] 10.244.0.22:56826 - 5154 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151577s
	[INFO] 10.244.0.22:34009 - 20877 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000175171s
	[INFO] 10.244.0.22:47086 - 56751 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091852s
	[INFO] 10.244.0.22:45919 - 6012 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095085s
	[INFO] 10.244.0.22:37872 - 33595 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003127291s
	[INFO] 10.244.0.22:58544 - 3234 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003586803s
	[INFO] 10.244.0.22:34813 - 19895 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004201792s
	[INFO] 10.244.0.22:60988 - 58217 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005450137s
	[INFO] 10.244.0.22:55463 - 22980 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005077908s
	[INFO] 10.244.0.22:35577 - 40764 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005725827s
	[INFO] 10.244.0.22:55201 - 19501 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00586828s
	[INFO] 10.244.0.22:47590 - 18187 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009774018s
	[INFO] 10.244.0.22:43687 - 40215 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000894336s
	[INFO] 10.244.0.22:40249 - 16957 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001679777s
	[INFO] 10.244.0.26:43025 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196657s
	[INFO] 10.244.0.26:48775 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015597s
	
	
	==> describe nodes <==
	Name:               addons-049370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-049370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a
	                    minikube.k8s.io/name=addons-049370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T20_56_14_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-049370
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-049370"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 20:56:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-049370
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 21:04:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 20:59:17 +0000   Thu, 04 Sep 2025 20:56:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 20:59:17 +0000   Thu, 04 Sep 2025 20:56:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 20:59:17 +0000   Thu, 04 Sep 2025 20:56:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 20:59:17 +0000   Thu, 04 Sep 2025 20:57:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-049370
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a303a7b5bdc4444fa740fba6d81d7a69
	  System UUID:                e0421e3f-022c-4346-89b0-92bd27eff9ea
	  Boot ID:                    d34ed5fc-a148-45de-9a0e-f744d5f792e8
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  gadget                      gadget-whkft                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-9hj2l                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         7m46s
	  kube-system                 coredns-66bc5c9577-m8z9t                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m50s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 csi-hostpathplugin-98s7l                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 etcd-addons-049370                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m56s
	  kube-system                 kindnet-7bfb9                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m51s
	  kube-system                 kube-apiserver-addons-049370                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-controller-manager-addons-049370                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 kube-proxy-k5lnm                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-scheduler-addons-049370                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 snapshot-controller-7d9fbc56b8-5d9jh                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 snapshot-controller-7d9fbc56b8-mgxvk                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	  local-path-storage          helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  local-path-storage          local-path-provisioner-648f6765c9-dlgrh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 7m46s  kube-proxy       
	  Normal   Starting                 7m57s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m57s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m56s  kubelet          Node addons-049370 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m56s  kubelet          Node addons-049370 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m56s  kubelet          Node addons-049370 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m52s  node-controller  Node addons-049370 event: Registered Node addons-049370 in Controller
	  Normal   NodeReady                7m7s   kubelet          Node addons-049370 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000069] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000004] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +1.008573] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000007] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000003] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000001] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +2.015727] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000007] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000000] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000003] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +4.127589] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000006] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000000] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000002] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +8.191103] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000006] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000017] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000002] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	
	
	==> etcd [3f2b5739caaa53e307caf9baa0ce3898f9c7585d8d2ae3924c36566f18f3e2c1] <==
	{"level":"info","ts":"2025-09-04T20:56:22.648997Z","caller":"traceutil/trace.go:172","msg":"trace[1125510617] transaction","detail":"{read_only:false; number_of_response:1; response_revision:392; }","duration":"385.315266ms","start":"2025-09-04T20:56:22.263655Z","end":"2025-09-04T20:56:22.648970Z","steps":["trace[1125510617] 'process raft request'  (duration: 202.681038ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:56:22.649181Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T20:56:22.263638Z","time spent":"385.424915ms","remote":"127.0.0.1:58690","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":55,"response count":0,"response size":4404,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-nqwmj\" mod_revision:350 > success:<request_delete_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-nqwmj\" > > failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-nqwmj\" > >"}
	{"level":"info","ts":"2025-09-04T20:56:22.649436Z","caller":"traceutil/trace.go:172","msg":"trace[495038974] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"284.907787ms","start":"2025-09-04T20:56:22.364514Z","end":"2025-09-04T20:56:22.649422Z","steps":["trace[495038974] 'process raft request'  (duration: 185.809853ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:56:26.851373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:26.859519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.396241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.402544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.546682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T20:56:48.553504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55094","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T20:57:34.056678Z","caller":"traceutil/trace.go:172","msg":"trace[1745603704] transaction","detail":"{read_only:false; response_revision:1077; number_of_response:1; }","duration":"195.917023ms","start":"2025-09-04T20:57:33.860734Z","end":"2025-09-04T20:57:34.056651Z","steps":["trace[1745603704] 'process raft request'  (duration: 195.733375ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:57:34.056844Z","caller":"traceutil/trace.go:172","msg":"trace[278916272] transaction","detail":"{read_only:false; response_revision:1079; number_of_response:1; }","duration":"106.019959ms","start":"2025-09-04T20:57:33.950812Z","end":"2025-09-04T20:57:34.056832Z","steps":["trace[278916272] 'process raft request'  (duration: 105.809945ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:57:34.057108Z","caller":"traceutil/trace.go:172","msg":"trace[325232300] transaction","detail":"{read_only:false; response_revision:1078; number_of_response:1; }","duration":"111.429632ms","start":"2025-09-04T20:57:33.945667Z","end":"2025-09-04T20:57:34.057097Z","steps":["trace[325232300] 'process raft request'  (duration: 110.913633ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:13.409295Z","caller":"traceutil/trace.go:172","msg":"trace[1613289946] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"116.026957ms","start":"2025-09-04T20:58:13.293249Z","end":"2025-09-04T20:58:13.409276Z","steps":["trace[1613289946] 'process raft request'  (duration: 115.923335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:58:30.444195Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.156243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T20:58:30.444268Z","caller":"traceutil/trace.go:172","msg":"trace[447435360] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1361; }","duration":"127.240307ms","start":"2025-09-04T20:58:30.317014Z","end":"2025-09-04T20:58:30.444254Z","steps":["trace[447435360] 'agreement among raft nodes before linearized reading'  (duration: 44.055437ms)","trace[447435360] 'range keys from in-memory index tree'  (duration: 83.073918ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.444209Z","caller":"traceutil/trace.go:172","msg":"trace[2038104867] transaction","detail":"{read_only:false; response_revision:1362; number_of_response:1; }","duration":"131.730807ms","start":"2025-09-04T20:58:30.312459Z","end":"2025-09-04T20:58:30.444190Z","steps":["trace[2038104867] 'process raft request'  (duration: 48.653692ms)","trace[2038104867] 'compare'  (duration: 82.954949ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.642255Z","caller":"traceutil/trace.go:172","msg":"trace[822073905] transaction","detail":"{read_only:false; response_revision:1367; number_of_response:1; }","duration":"111.76712ms","start":"2025-09-04T20:58:30.530471Z","end":"2025-09-04T20:58:30.642238Z","steps":["trace[822073905] 'process raft request'  (duration: 111.724252ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:30.642413Z","caller":"traceutil/trace.go:172","msg":"trace[444267485] transaction","detail":"{read_only:false; response_revision:1366; number_of_response:1; }","duration":"173.99749ms","start":"2025-09-04T20:58:30.468390Z","end":"2025-09-04T20:58:30.642388Z","steps":["trace[444267485] 'process raft request'  (duration: 79.867478ms)","trace[444267485] 'compare'  (duration: 93.793378ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.701102Z","caller":"traceutil/trace.go:172","msg":"trace[1596308249] transaction","detail":"{read_only:false; response_revision:1368; number_of_response:1; }","duration":"114.975474ms","start":"2025-09-04T20:58:30.586109Z","end":"2025-09-04T20:58:30.701084Z","steps":["trace[1596308249] 'process raft request'  (duration: 114.88463ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T20:58:30.831980Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.440231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T20:58:30.832128Z","caller":"traceutil/trace.go:172","msg":"trace[1697395868] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:1369; }","duration":"126.599135ms","start":"2025-09-04T20:58:30.705510Z","end":"2025-09-04T20:58:30.832110Z","steps":["trace[1697395868] 'agreement among raft nodes before linearized reading'  (duration: 66.572708ms)","trace[1697395868] 'range keys from in-memory index tree'  (duration: 59.837538ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.832171Z","caller":"traceutil/trace.go:172","msg":"trace[645371951] transaction","detail":"{read_only:false; response_revision:1370; number_of_response:1; }","duration":"127.267701ms","start":"2025-09-04T20:58:30.704881Z","end":"2025-09-04T20:58:30.832149Z","steps":["trace[645371951] 'process raft request'  (duration: 67.258003ms)","trace[645371951] 'compare'  (duration: 59.843945ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T20:58:30.832360Z","caller":"traceutil/trace.go:172","msg":"trace[1978939171] transaction","detail":"{read_only:false; response_revision:1371; number_of_response:1; }","duration":"127.399776ms","start":"2025-09-04T20:58:30.704948Z","end":"2025-09-04T20:58:30.832348Z","steps":["trace[1978939171] 'process raft request'  (duration: 127.166902ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:30.832409Z","caller":"traceutil/trace.go:172","msg":"trace[1686127060] transaction","detail":"{read_only:false; response_revision:1372; number_of_response:1; }","duration":"126.865141ms","start":"2025-09-04T20:58:30.705526Z","end":"2025-09-04T20:58:30.832392Z","steps":["trace[1686127060] 'process raft request'  (duration: 126.765828ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T20:58:30.836024Z","caller":"traceutil/trace.go:172","msg":"trace[1408390840] transaction","detail":"{read_only:false; response_revision:1373; number_of_response:1; }","duration":"126.512815ms","start":"2025-09-04T20:58:30.705957Z","end":"2025-09-04T20:58:30.832469Z","steps":["trace[1408390840] 'process raft request'  (duration: 126.396725ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:04:11 up  2:46,  0 users,  load average: 0.36, 0.65, 0.54
	Linux addons-049370 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [c934f0f4b966c80bea5021ff2cd61d60fc1f09abb35b790b7fa2c052eb648772] <==
	I0904 21:02:03.568859       1 main.go:301] handling current node
	I0904 21:02:13.576821       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:02:13.576862       1 main.go:301] handling current node
	I0904 21:02:23.572836       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:02:23.572868       1 main.go:301] handling current node
	I0904 21:02:33.568965       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:02:33.568994       1 main.go:301] handling current node
	I0904 21:02:43.568046       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:02:43.568087       1 main.go:301] handling current node
	I0904 21:02:53.567616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:02:53.567668       1 main.go:301] handling current node
	I0904 21:03:03.572833       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:03:03.572872       1 main.go:301] handling current node
	I0904 21:03:13.567698       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:03:13.567736       1 main.go:301] handling current node
	I0904 21:03:23.572888       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:03:23.572931       1 main.go:301] handling current node
	I0904 21:03:33.568832       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:03:33.568861       1 main.go:301] handling current node
	I0904 21:03:43.567370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:03:43.567398       1 main.go:301] handling current node
	I0904 21:03:53.567483       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:03:53.567532       1 main.go:301] handling current node
	I0904 21:04:03.569552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:04:03.569594       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e754d67808d98a38d816120e6f2508d9bc342968fa147d926ff9d362a0796737] <==
	I0904 20:57:27.380026       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 20:57:31.385031       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 20:57:31.385076       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.22.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.22.155:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	E0904 20:57:31.385084       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 20:57:31.395899       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0904 20:57:38.465684       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0904 20:58:21.163245       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60624: use of closed network connection
	E0904 20:58:21.316649       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60648: use of closed network connection
	I0904 20:58:29.788002       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0904 20:58:29.995932       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.217.161"}
	I0904 20:58:30.465191       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.103.10"}
	I0904 20:58:31.764493       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 20:58:45.951516       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 20:59:32.395923       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0904 20:59:34.466474       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:00:11.575196       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:00:37.217035       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:01:12.003193       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:01:57.163986       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:02:17.079567       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:03:09.847622       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:03:20.654283       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [c5667de904598d16bc7b2fd5cfcd19280dc33b7d377dd608e1fc9961af9c518c] <==
	I0904 20:56:18.365258       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0904 20:56:18.365795       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0904 20:56:18.366547       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0904 20:56:18.367665       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 20:56:18.367674       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0904 20:56:18.378969       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0904 20:56:18.429609       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0904 20:56:18.463438       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 20:56:18.463461       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0904 20:56:18.463467       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0904 20:56:18.530779       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0904 20:56:23.945861       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0904 20:56:48.371672       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 20:56:48.371810       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0904 20:56:48.371857       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0904 20:56:48.472497       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 20:56:48.537614       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0904 20:56:48.541310       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0904 20:56:48.641464       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 20:57:08.353437       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0904 20:57:18.477549       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 20:57:18.648659       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0904 20:58:34.483247       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0904 20:59:09.361673       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0904 20:59:22.484206       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	
	
	==> kube-proxy [f6a9e9c72d6babda359c890098381bd848b231b9b281facb3f3cdc9763aee908] <==
	I0904 20:56:23.263174       1 server_linux.go:53] "Using iptables proxy"
	I0904 20:56:23.846890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 20:56:23.948000       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 20:56:23.948116       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 20:56:23.948247       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 20:56:24.347256       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 20:56:24.347395       1 server_linux.go:132] "Using iptables Proxier"
	I0904 20:56:24.361570       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 20:56:24.362683       1 server.go:527] "Version info" version="v1.34.0"
	I0904 20:56:24.362781       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 20:56:24.364537       1 config.go:200] "Starting service config controller"
	I0904 20:56:24.364555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 20:56:24.364576       1 config.go:106] "Starting endpoint slice config controller"
	I0904 20:56:24.364583       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 20:56:24.364619       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 20:56:24.364629       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 20:56:24.365511       1 config.go:309] "Starting node config controller"
	I0904 20:56:24.365557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 20:56:24.365570       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 20:56:24.465478       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 20:56:24.465535       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 20:56:24.465550       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c29c83b9956a13fe199c44a49b15dba2a1c0c21d5ba02c6402f6f23568614412] <==
	E0904 20:56:11.467729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 20:56:11.473120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 20:56:11.473221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 20:56:11.473483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 20:56:11.473678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 20:56:11.473762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 20:56:11.473851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 20:56:11.473905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 20:56:11.473951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 20:56:11.474028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0904 20:56:11.474102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0904 20:56:11.474173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 20:56:11.474244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 20:56:11.474321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 20:56:11.474380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 20:56:11.474468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 20:56:11.474521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 20:56:11.475320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 20:56:11.478116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 20:56:12.362165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 20:56:12.378813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 20:56:12.396736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 20:56:12.484405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 20:56:12.588679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0904 20:56:15.667534       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 21:03:14 addons-049370 kubelet[1676]: E0904 21:03:14.178655    1676 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c0ba448d45483231b803740bbbc996622724deac82183d7c39721c74e011eb5e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c0ba448d45483231b803740bbbc996622724deac82183d7c39721c74e011eb5e/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 21:03:14 addons-049370 kubelet[1676]: E0904 21:03:14.316859    1676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019794316605380  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:03:14 addons-049370 kubelet[1676]: E0904 21:03:14.316894    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019794316605380  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:03:23 addons-049370 kubelet[1676]: E0904 21:03:23.979079    1676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fd6a62c3-3f28-47de-b93e-6a4222d72423"
	Sep 04 21:03:24 addons-049370 kubelet[1676]: E0904 21:03:24.318971    1676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019804318755984  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:03:24 addons-049370 kubelet[1676]: E0904 21:03:24.319002    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019804318755984  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:03:24 addons-049370 kubelet[1676]: I0904 21:03:24.977488    1676 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 04 21:03:29 addons-049370 kubelet[1676]: E0904 21:03:29.792324    1676 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 04 21:03:29 addons-049370 kubelet[1676]: E0904 21:03:29.792388    1676 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 04 21:03:29 addons-049370 kubelet[1676]: E0904 21:03:29.792627    1676 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b_local-path-storage(96904e25-b0d6-4506-8c7c-03307f38bc2b): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 04 21:03:29 addons-049370 kubelet[1676]: E0904 21:03:29.792734    1676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b" podUID="96904e25-b0d6-4506-8c7c-03307f38bc2b"
	Sep 04 21:03:30 addons-049370 kubelet[1676]: E0904 21:03:30.518167    1676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b" podUID="96904e25-b0d6-4506-8c7c-03307f38bc2b"
	Sep 04 21:03:34 addons-049370 kubelet[1676]: E0904 21:03:34.320973    1676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019814320741535  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:03:34 addons-049370 kubelet[1676]: E0904 21:03:34.321012    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019814320741535  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:03:36 addons-049370 kubelet[1676]: E0904 21:03:36.978728    1676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="fd6a62c3-3f28-47de-b93e-6a4222d72423"
	Sep 04 21:03:44 addons-049370 kubelet[1676]: E0904 21:03:44.323141    1676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019824322921263  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:03:44 addons-049370 kubelet[1676]: E0904 21:03:44.323173    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019824322921263  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:03:54 addons-049370 kubelet[1676]: E0904 21:03:54.325677    1676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019834325446252  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:03:54 addons-049370 kubelet[1676]: E0904 21:03:54.325709    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019834325446252  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:04:00 addons-049370 kubelet[1676]: E0904 21:04:00.451251    1676 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 04 21:04:00 addons-049370 kubelet[1676]: E0904 21:04:00.451312    1676 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 04 21:04:00 addons-049370 kubelet[1676]: E0904 21:04:00.451493    1676 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(76e4007b-c8c9-43e1-882d-36f7c6c048cc): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 04 21:04:00 addons-049370 kubelet[1676]: E0904 21:04:00.451545    1676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="76e4007b-c8c9-43e1-882d-36f7c6c048cc"
	Sep 04 21:04:04 addons-049370 kubelet[1676]: E0904 21:04:04.327374    1676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757019844327142103  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	Sep 04 21:04:04 addons-049370 kubelet[1676]: E0904 21:04:04.327421    1676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757019844327142103  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:518983}  inodes_used:{value:206}}"
	
	
	==> storage-provisioner [5a078a0cc821dc014bcb985333d5bbfa410ad383f9567686488e54f4bdadf77c] <==
	W0904 21:03:46.652813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:48.656108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:48.661370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:50.663780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:50.667286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:52.670193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:52.674799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:54.677588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:54.682306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:56.684768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:56.688076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:58.690417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:03:58.694770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:00.697276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:00.700836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:02.703645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:02.707282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:04.710129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:04.713606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:06.716871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:06.721844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:08.724610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:08.728518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:10.732298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:04:10.737080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-049370 -n addons-049370
helpers_test.go:269: (dbg) Run:  kubectl --context addons-049370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-049370 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-049370 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b: exit status 1 (78.210666ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-049370/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 20:58:29 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6ptm9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6ptm9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m42s                default-scheduler  Successfully assigned default/nginx to addons-049370
	  Warning  Failed     5m10s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     72s (x3 over 5m10s)  kubelet            Error: ErrImagePull
	  Warning  Failed     72s (x2 over 3m15s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    35s (x5 over 5m10s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     35s (x5 over 5m10s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    24s (x4 over 5m41s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-049370/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 20:59:14 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hr2vm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-hr2vm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m57s                default-scheduler  Successfully assigned default/task-pv-pod to addons-049370
	  Warning  Failed     103s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    89s (x2 over 3m45s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     89s (x2 over 3m45s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    76s (x3 over 4m56s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     11s (x2 over 3m46s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     11s (x3 over 3m46s)  kubelet            Error: ErrImagePull
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hzwmn (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-hzwmn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bcplk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gtdvl" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-049370 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-bcplk ingress-nginx-admission-patch-gtdvl helper-pod-create-pvc-ca8c5764-63fd-4dd4-a9e0-9769c1ad0f4b: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/LocalPath (302.50s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-434682 --alsologtostderr -v=1]
E0904 21:18:12.066205  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-434682 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-434682 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-434682 --alsologtostderr -v=1] stderr:
I0904 21:17:40.649454  440042 out.go:360] Setting OutFile to fd 1 ...
I0904 21:17:40.649575  440042 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:17:40.649587  440042 out.go:374] Setting ErrFile to fd 2...
I0904 21:17:40.649590  440042 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:17:40.649778  440042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
I0904 21:17:40.650036  440042 mustload.go:65] Loading cluster: functional-434682
I0904 21:17:40.650428  440042 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:17:40.650827  440042 cli_runner.go:164] Run: docker container inspect functional-434682 --format={{.State.Status}}
I0904 21:17:40.670904  440042 host.go:66] Checking if "functional-434682" exists ...
I0904 21:17:40.671183  440042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0904 21:17:40.721395  440042 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-04 21:17:40.712008293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0904 21:17:40.721506  440042 api_server.go:166] Checking apiserver status ...
I0904 21:17:40.721576  440042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0904 21:17:40.721615  440042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
I0904 21:17:40.741950  440042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
I0904 21:17:40.834808  440042 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5499/cgroup
I0904 21:17:40.843015  440042 api_server.go:182] apiserver freezer: "9:freezer:/docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio/crio-d1ff769d1c0c6befb61371579e156cb8df3f874152f997a641d98dbfa7a31c3d"
I0904 21:17:40.843082  440042 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio/crio-d1ff769d1c0c6befb61371579e156cb8df3f874152f997a641d98dbfa7a31c3d/freezer.state
I0904 21:17:40.850485  440042 api_server.go:204] freezer state: "THAWED"
I0904 21:17:40.850514  440042 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0904 21:17:40.855412  440042 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0904 21:17:40.855447  440042 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0904 21:17:40.855588  440042 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:17:40.855611  440042 addons.go:69] Setting dashboard=true in profile "functional-434682"
I0904 21:17:40.855624  440042 addons.go:238] Setting addon dashboard=true in "functional-434682"
I0904 21:17:40.855648  440042 host.go:66] Checking if "functional-434682" exists ...
I0904 21:17:40.855939  440042 cli_runner.go:164] Run: docker container inspect functional-434682 --format={{.State.Status}}
I0904 21:17:40.875624  440042 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0904 21:17:40.876724  440042 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0904 21:17:40.877792  440042 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0904 21:17:40.877823  440042 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0904 21:17:40.877909  440042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
I0904 21:17:40.895493  440042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
I0904 21:17:40.989465  440042 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0904 21:17:40.989497  440042 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0904 21:17:41.005202  440042 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0904 21:17:41.005230  440042 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0904 21:17:41.021530  440042 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0904 21:17:41.021564  440042 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0904 21:17:41.037222  440042 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0904 21:17:41.037247  440042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0904 21:17:41.053178  440042 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0904 21:17:41.053202  440042 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0904 21:17:41.069016  440042 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0904 21:17:41.069043  440042 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0904 21:17:41.084612  440042 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0904 21:17:41.084638  440042 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0904 21:17:41.100473  440042 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0904 21:17:41.100497  440042 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0904 21:17:41.115768  440042 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0904 21:17:41.115832  440042 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0904 21:17:41.131140  440042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0904 21:17:41.585300  440042 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-434682 addons enable metrics-server

                                                
                                                
I0904 21:17:41.586400  440042 addons.go:201] Writing out "functional-434682" config to set dashboard=true...
W0904 21:17:41.586676  440042 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0904 21:17:41.587515  440042 kapi.go:59] client config for functional-434682: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt", KeyFile:"/home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.key", CAFile:"/home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0904 21:17:41.588080  440042 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0904 21:17:41.588101  440042 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0904 21:17:41.588111  440042 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0904 21:17:41.588117  440042 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0904 21:17:41.588133  440042 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0904 21:17:41.595773  440042 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  da7b5967-359d-4d86-8000-234d4f4feade 1329 0 2025-09-04 21:17:41 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-04 21:17:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.253.138,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.253.138],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0904 21:17:41.595925  440042 out.go:285] * Launching proxy ...
* Launching proxy ...
I0904 21:17:41.595978  440042 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-434682 proxy --port 36195]
I0904 21:17:41.596196  440042 dashboard.go:157] Waiting for kubectl to output host:port ...
I0904 21:17:41.639007  440042 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0904 21:17:41.639057  440042 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0904 21:17:41.647887  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1ddb7ef7-63b5-4252-bd14-27fe3426762f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc00069fb80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045cb40 TLS:<nil>}
I0904 21:17:41.648003  440042 retry.go:31] will retry after 91.828µs: Temporary Error: unexpected response code: 503
I0904 21:17:41.651539  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[89f5b58e-c492-4225-b029-9df9a428d8c3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc00183e840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045cc80 TLS:<nil>}
I0904 21:17:41.651600  440042 retry.go:31] will retry after 126.057µs: Temporary Error: unexpected response code: 503
I0904 21:17:41.654936  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[63e874a1-c519-4afe-886f-f7eedc6b368f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc000afccc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055ac80 TLS:<nil>}
I0904 21:17:41.654992  440042 retry.go:31] will retry after 248.501µs: Temporary Error: unexpected response code: 503
I0904 21:17:41.658206  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cd3c612c-83b8-42f6-96a7-7cbf7f17ef0a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc00069fc80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a0500 TLS:<nil>}
I0904 21:17:41.658263  440042 retry.go:31] will retry after 287.056µs: Temporary Error: unexpected response code: 503
I0904 21:17:41.661350  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4ea29422-5c03-445e-91ce-b3000b225f50] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc000afcdc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045cdc0 TLS:<nil>}
I0904 21:17:41.661398  440042 retry.go:31] will retry after 384.419µs: Temporary Error: unexpected response code: 503
I0904 21:17:41.664475  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5a8a81d5-1707-4e2f-88f9-a554dcf69e0b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc00183e940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a0640 TLS:<nil>}
I0904 21:17:41.664532  440042 retry.go:31] will retry after 728.755µs: Temporary Error: unexpected response code: 503
I0904 21:17:41.667349  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1f7bdc57-e537-4b82-ad48-7d855031b14b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc00183ea40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055adc0 TLS:<nil>}
I0904 21:17:41.667388  440042 retry.go:31] will retry after 1.572684ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.671304  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6fbf0a80-86af-4ca7-8fd7-102c0da10c9a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc000afcec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055b040 TLS:<nil>}
I0904 21:17:41.671348  440042 retry.go:31] will retry after 2.203542ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.676157  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b53a25f8-e7de-475a-a71e-f0d9e50e1c7a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc00183eb00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a0780 TLS:<nil>}
I0904 21:17:41.676205  440042 retry.go:31] will retry after 3.360255ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.682202  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ae55699f-284e-44a7-9416-abb3e89f6b19] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc000afcfc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055b680 TLS:<nil>}
I0904 21:17:41.682251  440042 retry.go:31] will retry after 4.764468ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.689232  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7d800807-b9fc-48f3-974b-4c953b1b0720] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc00183ec00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a08c0 TLS:<nil>}
I0904 21:17:41.689278  440042 retry.go:31] will retry after 7.218549ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.699339  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9ead0187-ffd9-405a-9407-a6176eea0a73] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc00183ecc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055b7c0 TLS:<nil>}
I0904 21:17:41.699382  440042 retry.go:31] will retry after 12.079796ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.714289  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[41aafc0d-4479-4780-9cc6-ad6660f5b1b9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc000afd0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055b900 TLS:<nil>}
I0904 21:17:41.714323  440042 retry.go:31] will retry after 13.158424ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.730438  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dc8386c1-d9e4-4448-8744-9b0f8b99d41f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc00069fe00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a0a00 TLS:<nil>}
I0904 21:17:41.730489  440042 retry.go:31] will retry after 20.657524ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.753921  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d9d26e97-6b64-4fc6-8d34-2bc18916426b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc000afd1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045cf00 TLS:<nil>}
I0904 21:17:41.753984  440042 retry.go:31] will retry after 33.873773ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.790566  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[613be7ba-5a8c-462c-b6a9-f2c17befc184] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc00069ff00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a0b40 TLS:<nil>}
I0904 21:17:41.790627  440042 retry.go:31] will retry after 52.880225ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.846892  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b0ae08e8-9e0b-4589-87eb-11d61bc1e6d9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc00183ee00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045d040 TLS:<nil>}
I0904 21:17:41.846965  440042 retry.go:31] will retry after 45.250291ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.896069  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[656f7620-2439-4783-8e42-55c889acb0b6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc000afd280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045d2c0 TLS:<nil>}
I0904 21:17:41.896154  440042 retry.go:31] will retry after 64.229319ms: Temporary Error: unexpected response code: 503
I0904 21:17:41.964113  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cbdc584c-ef1e-4771-85ff-d6b182c28688] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:41 GMT]] Body:0xc001894100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a0c80 TLS:<nil>}
I0904 21:17:41.964193  440042 retry.go:31] will retry after 79.494792ms: Temporary Error: unexpected response code: 503
I0904 21:17:42.047325  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c231873b-9c32-479a-b6a8-1fc4748b17bf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:42 GMT]] Body:0xc00183ee80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045d400 TLS:<nil>}
I0904 21:17:42.047400  440042 retry.go:31] will retry after 237.427258ms: Temporary Error: unexpected response code: 503
I0904 21:17:42.288554  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f730b90d-6b22-428e-9bde-81be31115a28] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:42 GMT]] Body:0xc001894200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055ba40 TLS:<nil>}
I0904 21:17:42.288619  440042 retry.go:31] will retry after 224.8594ms: Temporary Error: unexpected response code: 503
I0904 21:17:42.516802  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a5e05b5e-1e60-4c0b-a3dd-86dd8e3e29cd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:42 GMT]] Body:0xc00183ef40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045d680 TLS:<nil>}
I0904 21:17:42.516865  440042 retry.go:31] will retry after 667.316126ms: Temporary Error: unexpected response code: 503
I0904 21:17:43.187495  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3092cbdc-221d-4205-bee8-01c79878e076] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:43 GMT]] Body:0xc001894300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00055bb80 TLS:<nil>}
I0904 21:17:43.187586  440042 retry.go:31] will retry after 999.834062ms: Temporary Error: unexpected response code: 503
I0904 21:17:44.190405  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5625bb3c-7df0-406a-8e01-40ac14f392cb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:44 GMT]] Body:0xc000afd3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045d7c0 TLS:<nil>}
I0904 21:17:44.190467  440042 retry.go:31] will retry after 1.05438531s: Temporary Error: unexpected response code: 503
I0904 21:17:45.248572  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[30a82241-e589-47a7-b2c3-86f180ce1303] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:45 GMT]] Body:0xc0018943c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a0dc0 TLS:<nil>}
I0904 21:17:45.248636  440042 retry.go:31] will retry after 1.710653023s: Temporary Error: unexpected response code: 503
I0904 21:17:46.962956  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5bf1d289-5197-493d-bbad-34308cc36020] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:46 GMT]] Body:0xc0018944c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045d900 TLS:<nil>}
I0904 21:17:46.963017  440042 retry.go:31] will retry after 2.403497359s: Temporary Error: unexpected response code: 503
I0904 21:17:49.369414  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4eab5635-4f48-4a16-9c1e-3575ab4e8e55] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:49 GMT]] Body:0xc000afd4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045da40 TLS:<nil>}
I0904 21:17:49.369496  440042 retry.go:31] will retry after 3.688295583s: Temporary Error: unexpected response code: 503
I0904 21:17:53.062003  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ba55173b-823c-476b-b4ef-a383fe06ba2c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:53 GMT]] Body:0xc00183f080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045db80 TLS:<nil>}
I0904 21:17:53.062069  440042 retry.go:31] will retry after 4.643750722s: Temporary Error: unexpected response code: 503
I0904 21:17:57.710157  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[350925fe-aca6-44a2-b613-f025f07c086f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:17:57 GMT]] Body:0xc000ace440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001984000 TLS:<nil>}
I0904 21:17:57.710244  440042 retry.go:31] will retry after 8.446002244s: Temporary Error: unexpected response code: 503
I0904 21:18:06.159166  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[180c5f46-d45f-4e28-99e1-c56764938099] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:18:06 GMT]] Body:0xc000afd5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045dcc0 TLS:<nil>}
I0904 21:18:06.159230  440042 retry.go:31] will retry after 17.958142779s: Temporary Error: unexpected response code: 503
I0904 21:18:24.120734  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[998f4569-00ab-48d1-b951-6b97a27b7f1e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:18:24 GMT]] Body:0xc000afd640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005c23c0 TLS:<nil>}
I0904 21:18:24.120831  440042 retry.go:31] will retry after 26.250649282s: Temporary Error: unexpected response code: 503
I0904 21:18:50.375812  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f44a1abd-3cea-4d67-a3ff-bb4b8dcee920] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:18:50 GMT]] Body:0xc001894680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a0f00 TLS:<nil>}
I0904 21:18:50.375910  440042 retry.go:31] will retry after 37.850368957s: Temporary Error: unexpected response code: 503
I0904 21:19:28.230558  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5e93c2e2-9fb8-4d8b-aba5-c73308c27fee] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:19:28 GMT]] Body:0xc001894740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a1040 TLS:<nil>}
I0904 21:19:28.230636  440042 retry.go:31] will retry after 32.447529803s: Temporary Error: unexpected response code: 503
I0904 21:20:00.681960  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[48be9c0a-f56a-446d-9ca6-b3de63a3e242] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:20:00 GMT]] Body:0xc000d22080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a0000 TLS:<nil>}
I0904 21:20:00.682034  440042 retry.go:31] will retry after 1m21.280112011s: Temporary Error: unexpected response code: 503
I0904 21:21:21.966219  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f61dded5-2bff-421b-97ec-dae09de5757f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:21:21 GMT]] Body:0xc000d22180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005c2000 TLS:<nil>}
I0904 21:21:21.966313  440042 retry.go:31] will retry after 48.699692005s: Temporary Error: unexpected response code: 503
I0904 21:22:10.669302  440042 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f1236eea-50ae-4ed4-91dc-54cc61a36eee] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 21:22:10 GMT]] Body:0xc000afc080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005c2140 TLS:<nil>}
I0904 21:22:10.669397  440042 retry.go:31] will retry after 1m6.049356496s: Temporary Error: unexpected response code: 503
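The block above is minikube's retry helper (retry.go) polling the dashboard proxy URL and backing off with growing, jittered delays for as long as the service keeps answering 503. As a minimal sketch only (not minikube's actual retry.go; the function name, timeout and delays are illustrative), the same poll-with-backoff pattern looks roughly like this in Go:

// Minimal sketch (not minikube's retry.go): poll a URL with exponential
// backoff plus jitter until it stops returning 503, mirroring the growing
// "will retry after ..." intervals logged above.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

func pollUntilOK(url string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 100 * time.Millisecond
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // service finally answered 200
			}
		}
		// Sleep the current delay plus up to 50% jitter, then double it.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2)+1)))
		delay *= 2
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Proxy URL taken from the log above; adjust the port for your own run.
	err := pollUntilOK("http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/", 5*time.Minute)
	fmt.Println(err)
}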
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-434682
helpers_test.go:243: (dbg) docker inspect functional-434682:

-- stdout --
	[
	    {
	        "Id": "c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e",
	        "Created": "2025-09-04T21:07:38.362965102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 421064,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T21:07:38.3914292Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:26724cb1013292a869503ea8c35fa5a932e6ec3e4e85e318411373ae6a24478b",
	        "ResolvConfPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/hosts",
	        "LogPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e-json.log",
	        "Name": "/functional-434682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-434682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-434682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e",
	                "LowerDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686-init/diff:/var/lib/docker/overlay2/fcc29a201369e5fb579bfafdafe90606e5b86bd2f4b52beb7f6a97f7e955214d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-434682",
	                "Source": "/var/lib/docker/volumes/functional-434682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-434682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-434682",
	                "name.minikube.sigs.k8s.io": "functional-434682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a1ee1974cafddfa91d00d9aacf8ecbbf723cb04b47fdc840a7a8d178cf57558",
	            "SandboxKey": "/var/run/docker/netns/5a1ee1974caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33157"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-434682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:eb:a6:0f:f0:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e2d7bc8acf0e9f0624cc76f4cbe69fbd7f4637588b37e979a792472035792fd9",
	                    "EndpointID": "9900ed97e30e5355fe09aab9ccab90615da0bf4734544f78893d6f734e005f15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-434682",
	                        "c103d7054280"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
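The inspect output above shows how the kicbase container publishes each guest port on an ephemeral 127.0.0.1 host port (for example 8441/tcp, the apiserver port for this profile, is bound to 127.0.0.1:33158). As a hedged illustration, the snippet below pulls one such mapping out of `docker inspect -f` with a Go template over .NetworkSettings.Ports; the container name comes from the log above, everything else is an assumption for the example.

// Sketch: look up the host port Docker published for the apiserver port
// (8441/tcp) of the functional-434682 container.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", tmpl, "functional-434682").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	// With the state captured above this should print 33158.
	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}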
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-434682 -n functional-434682
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-434682 logs -n 25: (1.33350951s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-434682 ssh -- ls -la /mount-9p                                                                         │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh            │ functional-434682 ssh sudo umount -f /mount-9p                                                                    │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ mount          │ -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount1 --alsologtostderr -v=1 │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ ssh            │ functional-434682 ssh findmnt -T /mount1                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ mount          │ -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount2 --alsologtostderr -v=1 │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ mount          │ -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount3 --alsologtostderr -v=1 │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ ssh            │ functional-434682 ssh findmnt -T /mount1                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh            │ functional-434682 ssh findmnt -T /mount2                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh            │ functional-434682 ssh findmnt -T /mount3                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ mount          │ -p functional-434682 --kill=true                                                                                  │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ license        │                                                                                                                   │ minikube          │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ dashboard      │ --url --port 36195 -p functional-434682 --alsologtostderr -v=1                                                    │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ start          │ -p functional-434682 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │                     │
	│ start          │ -p functional-434682 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                   │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │                     │
	│ start          │ -p functional-434682 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │                     │
	│ update-context │ functional-434682 update-context --alsologtostderr -v=2                                                           │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ update-context │ functional-434682 update-context --alsologtostderr -v=2                                                           │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ update-context │ functional-434682 update-context --alsologtostderr -v=2                                                           │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ image          │ functional-434682 image ls --format short --alsologtostderr                                                       │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ image          │ functional-434682 image ls --format yaml --alsologtostderr                                                        │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ ssh            │ functional-434682 ssh pgrep buildkitd                                                                             │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │                     │
	│ image          │ functional-434682 image build -t localhost/my-image:functional-434682 testdata/build --alsologtostderr            │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ image          │ functional-434682 image ls                                                                                        │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ image          │ functional-434682 image ls --format json --alsologtostderr                                                        │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ image          │ functional-434682 image ls --format table --alsologtostderr                                                       │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 21:20:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 21:20:04.993040  441830 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:20:04.993174  441830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:20:04.993196  441830 out.go:374] Setting ErrFile to fd 2...
	I0904 21:20:04.993204  441830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:20:04.993516  441830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:20:04.994106  441830 out.go:368] Setting JSON to false
	I0904 21:20:04.995202  441830 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10954,"bootTime":1757009851,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:20:04.995343  441830 start.go:140] virtualization: kvm guest
	I0904 21:20:04.997427  441830 out.go:179] * [functional-434682] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 21:20:04.999082  441830 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:20:04.999117  441830 notify.go:220] Checking for updates...
	I0904 21:20:05.001703  441830 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:20:05.002997  441830 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 21:20:05.004192  441830 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 21:20:05.005525  441830 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:20:05.006818  441830 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:20:05.008411  441830 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:20:05.008953  441830 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:20:05.029798  441830 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 21:20:05.029929  441830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:20:05.076670  441830 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-04 21:20:05.067863181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:20:05.076787  441830 docker.go:318] overlay module found
	I0904 21:20:05.079775  441830 out.go:179] * Using the docker driver based on existing profile
	I0904 21:20:05.081002  441830 start.go:304] selected driver: docker
	I0904 21:20:05.081019  441830 start.go:918] validating driver "docker" against &{Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:20:05.081107  441830 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:20:05.083069  441830 out.go:203] 
	W0904 21:20:05.084193  441830 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 21:20:05.085390  441830 out.go:203] 
	
	
	==> CRI-O <==
	Sep 04 21:21:24 functional-434682 crio[4932]: time="2025-09-04 21:21:24.146681167Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3ed85dbe-4cd8-43b7-a72b-664c1ea28fa4 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:21:24 functional-434682 crio[4932]: time="2025-09-04 21:21:24.146912051Z" level=info msg="Image docker.io/nginx:alpine not found" id=3ed85dbe-4cd8-43b7-a72b-664c1ea28fa4 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:21:37 functional-434682 crio[4932]: time="2025-09-04 21:21:37.146819806Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7e3d4160-381b-4f9f-a116-2668993708fd name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:21:37 functional-434682 crio[4932]: time="2025-09-04 21:21:37.147068002Z" level=info msg="Image docker.io/nginx:alpine not found" id=7e3d4160-381b-4f9f-a116-2668993708fd name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:21:37 functional-434682 crio[4932]: time="2025-09-04 21:21:37.614807636Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=88e2b50d-e915-48a3-a99f-a28fd9171598 name=/runtime.v1.ImageService/PullImage
	Sep 04 21:21:37 functional-434682 crio[4932]: time="2025-09-04 21:21:37.615537124Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=10043128-e0b7-4a4d-8c7e-85224bc5bdfe name=/runtime.v1.ImageService/PullImage
	Sep 04 21:21:37 functional-434682 crio[4932]: time="2025-09-04 21:21:37.616866043Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 04 21:21:49 functional-434682 crio[4932]: time="2025-09-04 21:21:49.146359017Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=17ebb9c3-d2b7-4839-84ed-7ad9ee08bf63 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:21:49 functional-434682 crio[4932]: time="2025-09-04 21:21:49.146647593Z" level=info msg="Image docker.io/nginx:alpine not found" id=17ebb9c3-d2b7-4839-84ed-7ad9ee08bf63 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:21:50 functional-434682 crio[4932]: time="2025-09-04 21:21:50.146624002Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4e383d92-1efe-4afa-a98f-2b0c4e6fe25d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:21:50 functional-434682 crio[4932]: time="2025-09-04 21:21:50.146861828Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=4e383d92-1efe-4afa-a98f-2b0c4e6fe25d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:00 functional-434682 crio[4932]: time="2025-09-04 21:22:00.146173992Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7e7767c9-9068-479d-b259-692e94cbe9f8 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:00 functional-434682 crio[4932]: time="2025-09-04 21:22:00.146420516Z" level=info msg="Image docker.io/nginx:alpine not found" id=7e7767c9-9068-479d-b259-692e94cbe9f8 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:03 functional-434682 crio[4932]: time="2025-09-04 21:22:03.147053319Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=faab80ab-db40-4be8-9c41-055562f850f6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:03 functional-434682 crio[4932]: time="2025-09-04 21:22:03.147424455Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=faab80ab-db40-4be8-9c41-055562f850f6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:11 functional-434682 crio[4932]: time="2025-09-04 21:22:11.146811937Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=130da814-e0b4-4792-a0dd-adcc93d5baac name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:11 functional-434682 crio[4932]: time="2025-09-04 21:22:11.147054930Z" level=info msg="Image docker.io/nginx:alpine not found" id=130da814-e0b4-4792-a0dd-adcc93d5baac name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:22 functional-434682 crio[4932]: time="2025-09-04 21:22:22.695114024Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=21b14311-29f1-4b62-a8a2-bfcb1873031a name=/runtime.v1.ImageService/PullImage
	Sep 04 21:22:22 functional-434682 crio[4932]: time="2025-09-04 21:22:22.699529025Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 04 21:22:24 functional-434682 crio[4932]: time="2025-09-04 21:22:24.147134539Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=20308865-aea9-4876-9712-5a295592ce28 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:24 functional-434682 crio[4932]: time="2025-09-04 21:22:24.147431888Z" level=info msg="Image docker.io/nginx:alpine not found" id=20308865-aea9-4876-9712-5a295592ce28 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:35 functional-434682 crio[4932]: time="2025-09-04 21:22:35.147136094Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=a7066c30-9838-40a7-9ca6-70de0b5694d1 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:35 functional-434682 crio[4932]: time="2025-09-04 21:22:35.147397015Z" level=info msg="Image docker.io/mysql:5.7 not found" id=a7066c30-9838-40a7-9ca6-70de0b5694d1 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:39 functional-434682 crio[4932]: time="2025-09-04 21:22:39.146646617Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=6be9649f-535f-42f4-94e0-41aa4e2e7a42 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:22:39 functional-434682 crio[4932]: time="2025-09-04 21:22:39.146848833Z" level=info msg="Image docker.io/nginx:alpine not found" id=6be9649f-535f-42f4-94e0-41aa4e2e7a42 name=/runtime.v1.ImageService/ImageStatus
	
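In the CRI-O log above the kubelet keeps asking for ImageStatus of docker.io/nginx:alpine, docker.io/mysql:5.7 and the dashboard images, and CRI-O answers "not found", i.e. the pulls behind the ImagePullBackOff never completed. One way to cross-check what the node's image store actually contains is to list images through crictl inside the minikube container; the sketch below only shells out to the same binaries used elsewhere in this report, with the profile name taken from the logs above, and is an illustrative assumption rather than part of the test suite.

// Sketch: list the images present in the node's CRI-O store via
// `minikube ssh` + crictl, to compare against the "Image ... not found"
// messages above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-434682",
		"ssh", "--", "sudo", "crictl", "images")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("crictl listing failed:", err)
	}
}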
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7573ae495fc6e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   67f5c151d8c01       busybox-mount
	67e1ba685c7ce       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 minutes ago      Running             coredns                   2                   59a2b1cb9d90f       coredns-66bc5c9577-dcjtm
	02a541cc04bce       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      13 minutes ago      Running             kube-proxy                2                   bf24b3b007d85       kube-proxy-kjb6r
	c73eddf3856d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   83ab70c3927ef       storage-provisioner
	5f0798737b0ee       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      13 minutes ago      Running             kindnet-cni               2                   802a7e58a9fc4       kindnet-6t97w
	d1ff769d1c0c6       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      13 minutes ago      Running             kube-apiserver            0                   765688e92610c       kube-apiserver-functional-434682
	48f5af473405f       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      13 minutes ago      Running             kube-controller-manager   2                   473b1b2117910       kube-controller-manager-functional-434682
	dcfbf9517d0ea       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      13 minutes ago      Running             kube-scheduler            2                   3d774d9c25044       kube-scheduler-functional-434682
	8957f84301c86       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      13 minutes ago      Running             etcd                      2                   6d1a127f36d21       etcd-functional-434682
	9aae164b6b622       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   83ab70c3927ef       storage-provisioner
	d1bf470c95037       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 minutes ago      Exited              coredns                   1                   59a2b1cb9d90f       coredns-66bc5c9577-dcjtm
	0fd149fb30af7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      13 minutes ago      Exited              kindnet-cni               1                   802a7e58a9fc4       kindnet-6t97w
	147b1660e0ddf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      13 minutes ago      Exited              etcd                      1                   6d1a127f36d21       etcd-functional-434682
	64aaae8d657da       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      13 minutes ago      Exited              kube-proxy                1                   bf24b3b007d85       kube-proxy-kjb6r
	753221e723ff2       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      13 minutes ago      Exited              kube-scheduler            1                   3d774d9c25044       kube-scheduler-functional-434682
	279dbefad3613       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      13 minutes ago      Exited              kube-controller-manager   1                   473b1b2117910       kube-controller-manager-functional-434682
	
	
	==> coredns [67e1ba685c7ce4de1937d4603c36309e97a95bbef649db8b26cebb6ca66c20eb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46904 - 52148 "HINFO IN 4965619897574769528.1949479442252656424. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.070291104s
	
	
	==> coredns [d1bf470c950376569202b921adaecbf97801a2ebfe9fffcfc07500259775d103] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40539 - 31795 "HINFO IN 6716744690789338561.844443993120435347. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.092547911s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-434682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-434682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a
	                    minikube.k8s.io/name=functional-434682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T21_07_53_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 21:07:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-434682
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 21:22:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 21:20:43 +0000   Thu, 04 Sep 2025 21:07:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 21:20:43 +0000   Thu, 04 Sep 2025 21:07:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 21:20:43 +0000   Thu, 04 Sep 2025 21:07:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 21:20:43 +0000   Thu, 04 Sep 2025 21:08:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-434682
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 01aa6837660246f486ffcb896223620f
	  System UUID:                41151989-98b1-4281-8655-a34190cf40fb
	  Boot ID:                    d34ed5fc-a148-45de-9a0e-f744d5f792e8
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8n82x                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	  default                     hello-node-connect-7d85dfc575-84wxx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     mysql-5bb876957f-wzh2r                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     12m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-dcjtm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 etcd-functional-434682                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-6t97w                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-functional-434682              250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-functional-434682     200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-kjb6r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-functional-434682              100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-xj66s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5nqcb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Warning  CgroupV1                 14m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node functional-434682 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node functional-434682 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                kubelet          Node functional-434682 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           14m                node-controller  Node functional-434682 event: Registered Node functional-434682 in Controller
	  Normal   NodeReady                14m                kubelet          Node functional-434682 status is now: NodeReady
	  Normal   RegisteredNode           13m                node-controller  Node functional-434682 event: Registered Node functional-434682 in Controller
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-434682 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-434682 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node functional-434682 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                node-controller  Node functional-434682 event: Registered Node functional-434682 in Controller
	
	
	==> dmesg <==
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000001] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +2.015727] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000007] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000000] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000003] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +4.127589] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000006] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000000] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000002] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +8.191103] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000006] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000017] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000002] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[Sep 4 21:17] FS-Cache: Duplicate cookie detected
	[  +0.004735] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006741] FS-Cache: O-cookie d=00000000e536cbeb{9P.session} n=0000000049c43b6f
	[  +0.007557] FS-Cache: O-key=[10] '34323937353933383032'
	[  +0.005349] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006948] FS-Cache: N-cookie d=00000000e536cbeb{9P.session} n=000000005af789fc
	[  +0.008907] FS-Cache: N-key=[10] '34323937353933383032'
	
	
	==> etcd [147b1660e0ddf97548b185775163fa70312e80a72b723002562b6c48722dc082] <==
	{"level":"warn","ts":"2025-09-04T21:08:53.582396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.597054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.603187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.677082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.684274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.691194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.754454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36856","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T21:09:17.209581Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-04T21:09:17.209688Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-434682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-04T21:09:17.209779Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-04T21:09:17.488998Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-04T21:09:17.490450Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T21:09:17.490506Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-04T21:09:17.490516Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-04T21:09:17.490584Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-04T21:09:17.490587Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-09-04T21:09:17.490593Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-04T21:09:17.490524Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-04T21:09:17.490611Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-04T21:09:17.490618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T21:09:17.490589Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-04T21:09:17.493741Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-04T21:09:17.493803Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T21:09:17.493833Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-04T21:09:17.493842Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-434682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [8957f84301c86329b1ba346dc71da62fb29a19e0e4117e03b663512e68b198f2] <==
	{"level":"warn","ts":"2025-09-04T21:09:37.287614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.294726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.349193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.355340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.362426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.368780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.388971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.394843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.401550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.407730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.413741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.420262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.452212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.460028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.467015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.472974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.479382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.486266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.523417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.544995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.551759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.607583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43872","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T21:19:36.795428Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1015}
	{"level":"info","ts":"2025-09-04T21:19:36.813729Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1015,"took":"17.971289ms","hash":913127590,"current-db-size-bytes":3592192,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1683456,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-09-04T21:19:36.813774Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":913127590,"revision":1015,"compact-revision":-1}
	
	
	==> kernel <==
	 21:22:41 up  3:05,  0 users,  load average: 0.19, 0.31, 0.41
	Linux functional-434682 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0fd149fb30af74d582046fc60a9f80ce2cd48cec39f94992069066c32e3c7cb2] <==
	I0904 21:08:51.952269       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0904 21:08:51.952686       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0904 21:08:51.952925       1 main.go:148] setting mtu 1500 for CNI 
	I0904 21:08:51.952942       1 main.go:178] kindnetd IP family: "ipv4"
	I0904 21:08:51.952956       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-04T21:08:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0904 21:08:52.247356       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0904 21:08:52.247435       1 controller.go:381] "Waiting for informer caches to sync"
	I0904 21:08:52.247473       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0904 21:08:52.347068       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0904 21:08:54.552811       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0904 21:08:55.448192       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0904 21:08:55.448221       1 metrics.go:72] Registering metrics
	I0904 21:08:55.448274       1 controller.go:711] "Syncing nftables rules"
	I0904 21:09:02.247716       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:09:02.247793       1 main.go:301] handling current node
	I0904 21:09:12.248838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:09:12.248888       1 main.go:301] handling current node
	
	
	==> kindnet [5f0798737b0ee775905f7ed84860ba0292fcc66d918bbe2bed6f39454e2ef304] <==
	I0904 21:20:39.949636       1 main.go:301] handling current node
	I0904 21:20:49.949303       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:20:49.949375       1 main.go:301] handling current node
	I0904 21:20:59.954323       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:20:59.954357       1 main.go:301] handling current node
	I0904 21:21:09.949931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:21:09.949973       1 main.go:301] handling current node
	I0904 21:21:19.949631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:21:19.949687       1 main.go:301] handling current node
	I0904 21:21:29.956268       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:21:29.956310       1 main.go:301] handling current node
	I0904 21:21:39.949194       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:21:39.949250       1 main.go:301] handling current node
	I0904 21:21:49.949380       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:21:49.949421       1 main.go:301] handling current node
	I0904 21:21:59.952396       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:21:59.952430       1 main.go:301] handling current node
	I0904 21:22:09.949135       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:22:09.949168       1 main.go:301] handling current node
	I0904 21:22:19.954743       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:22:19.954775       1 main.go:301] handling current node
	I0904 21:22:29.956839       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:22:29.956872       1 main.go:301] handling current node
	I0904 21:22:39.949267       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:22:39.949299       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d1ff769d1c0c6befb61371579e156cb8df3f874152f997a641d98dbfa7a31c3d] <==
	I0904 21:10:08.560431       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.2.142"}
	I0904 21:10:42.059430       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:10:59.832577       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:11:46.403941       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:12:18.795065       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:12:56.359284       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:13:46.928701       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:13:59.740504       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:14:57.548567       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:15:16.202697       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:15:40.741421       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.152.103"}
	I0904 21:16:21.853186       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:16:34.347299       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:17:27.098406       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:17:41.429731       1 controller.go:667] quota admission added evaluator for: namespaces
	I0904 21:17:41.566220       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.253.138"}
	I0904 21:17:41.578326       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.133.102"}
	I0904 21:17:53.583223       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:18:44.345042       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:19:10.962589       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:19:38.264685       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 21:20:14.075383       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:20:27.005900       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:21:30.310659       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:21:44.413228       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [279dbefad36138dca3e8b3083d9738f2befb8c0fdb16fc5221ce1d1045032b84] <==
	I0904 21:08:57.719051       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0904 21:08:57.744869       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 21:08:57.744894       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0904 21:08:57.744915       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0904 21:08:57.767124       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0904 21:08:57.767137       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0904 21:08:57.767170       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0904 21:08:57.767197       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0904 21:08:57.767206       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0904 21:08:57.767231       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0904 21:08:57.768391       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0904 21:08:57.770586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0904 21:08:57.770622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0904 21:08:57.770629       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0904 21:08:57.770710       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0904 21:08:57.770718       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0904 21:08:57.770747       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0904 21:08:57.770754       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0904 21:08:57.770759       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0904 21:08:57.771926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0904 21:08:57.773107       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0904 21:08:57.775353       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0904 21:08:57.775454       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 21:08:57.779630       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0904 21:08:57.793917       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [48f5af473405f59b41d3ffd7c662a3d9baaa4584f9ffb7005f35ae65089b9947] <==
	I0904 21:09:41.654206       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0904 21:09:41.654231       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0904 21:09:41.654293       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0904 21:09:41.654357       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0904 21:09:41.654362       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-434682"
	I0904 21:09:41.654434       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 21:09:41.655382       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 21:09:41.656502       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0904 21:09:41.657593       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0904 21:09:41.658719       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 21:09:41.659792       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0904 21:09:41.659814       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0904 21:09:41.660983       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0904 21:09:41.663240       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0904 21:09:41.663330       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0904 21:09:41.663339       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0904 21:09:41.664527       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0904 21:09:41.665730       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0904 21:09:41.671737       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0904 21:17:41.471820       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 21:17:41.476246       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 21:17:41.477478       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 21:17:41.479895       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 21:17:41.481746       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 21:17:41.486362       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [02a541cc04bcecd4ac0c7cca2da8d9603b29a892bdad1bb5632d1ac58ce87821] <==
	I0904 21:09:39.581491       1 server_linux.go:53] "Using iptables proxy"
	I0904 21:09:39.698014       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 21:09:39.798964       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 21:09:39.799006       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 21:09:39.799149       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 21:09:39.820088       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 21:09:39.820137       1 server_linux.go:132] "Using iptables Proxier"
	I0904 21:09:39.824521       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 21:09:39.824875       1 server.go:527] "Version info" version="v1.34.0"
	I0904 21:09:39.824897       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:09:39.826297       1 config.go:200] "Starting service config controller"
	I0904 21:09:39.826311       1 config.go:106] "Starting endpoint slice config controller"
	I0904 21:09:39.826324       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 21:09:39.826334       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 21:09:39.826364       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 21:09:39.826370       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 21:09:39.826388       1 config.go:309] "Starting node config controller"
	I0904 21:09:39.826393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 21:09:39.826400       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 21:09:39.927396       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 21:09:39.927434       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 21:09:39.927442       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [64aaae8d657daf42d90854673aa7ce9152f3f2314bae53b22f99da336581d403] <==
	I0904 21:08:51.870065       1 server_linux.go:53] "Using iptables proxy"
	I0904 21:08:52.166652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0904 21:08:54.551281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-434682\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0904 21:08:56.067212       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 21:08:56.067257       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 21:08:56.067329       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 21:08:56.086742       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 21:08:56.086795       1 server_linux.go:132] "Using iptables Proxier"
	I0904 21:08:56.091114       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 21:08:56.091647       1 server.go:527] "Version info" version="v1.34.0"
	I0904 21:08:56.091678       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:08:56.092896       1 config.go:200] "Starting service config controller"
	I0904 21:08:56.092923       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 21:08:56.092933       1 config.go:106] "Starting endpoint slice config controller"
	I0904 21:08:56.092944       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 21:08:56.092964       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 21:08:56.093000       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 21:08:56.093028       1 config.go:309] "Starting node config controller"
	I0904 21:08:56.093041       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 21:08:56.093051       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 21:08:56.193435       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 21:08:56.193472       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 21:08:56.193511       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [753221e723ff279278be4081dd108fed0fe299d51e26bf89fcbd7b19210b8ee2] <==
	I0904 21:08:52.682221       1 serving.go:386] Generated self-signed cert in-memory
	W0904 21:08:54.347449       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 21:08:54.347601       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 21:08:54.347674       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 21:08:54.347716       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 21:08:54.460261       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 21:08:54.460303       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:08:54.467172       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 21:08:54.467450       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:08:54.467475       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:08:54.467560       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0904 21:08:54.553912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 21:08:54.553960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 21:08:54.557079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I0904 21:08:54.567733       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:09:17.210279       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0904 21:09:17.210329       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0904 21:09:17.210361       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0904 21:09:17.210412       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:09:17.210620       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0904 21:09:17.210645       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dcfbf9517d0ea85a697b2576bca9726545016d4d2a7bd6c9ba9e771af9db338e] <==
	I0904 21:09:36.468599       1 serving.go:386] Generated self-signed cert in-memory
	W0904 21:09:38.190339       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 21:09:38.190580       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 21:09:38.190649       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 21:09:38.190684       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 21:09:38.367009       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 21:09:38.370007       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:09:38.372630       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:09:38.372665       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:09:38.373087       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 21:09:38.373496       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 21:09:38.472872       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 21:22:24 functional-434682 kubelet[5296]: E0904 21:22:24.147779    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1ec180eb-eb78-40a3-aab9-f321efb0233d"
	Sep 04 21:22:25 functional-434682 kubelet[5296]: E0904 21:22:25.385476    5296 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757020945385264272  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 04 21:22:25 functional-434682 kubelet[5296]: E0904 21:22:25.385512    5296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757020945385264272  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 04 21:22:30 functional-434682 kubelet[5296]: E0904 21:22:30.146780    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-8n82x" podUID="795d04f0-d02a-434e-a4ab-297ee10360de"
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.147664    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-wzh2r" podUID="a63863a4-7fbe-4d12-b15a-fcfb930c1a96"
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.276107    5296 manager.go:1116] Failed to create existing container: /crio-bf24b3b007d859a7cfa0a5ca7264ba7f9516fbbd22c33d1533bcd5c90c059a58: Error finding container bf24b3b007d859a7cfa0a5ca7264ba7f9516fbbd22c33d1533bcd5c90c059a58: Status 404 returned error can't find the container with id bf24b3b007d859a7cfa0a5ca7264ba7f9516fbbd22c33d1533bcd5c90c059a58
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.276364    5296 manager.go:1116] Failed to create existing container: /crio-3d774d9c25044197ce37f59d6a35a5a6eb020558407253f05041b4c74a4823da: Error finding container 3d774d9c25044197ce37f59d6a35a5a6eb020558407253f05041b4c74a4823da: Status 404 returned error can't find the container with id 3d774d9c25044197ce37f59d6a35a5a6eb020558407253f05041b4c74a4823da
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.276571    5296 manager.go:1116] Failed to create existing container: /docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio-bdd53279ceb838b2012b9c8273f6ef97ffe695a25ae5343047ad5e1d925f67d2: Error finding container bdd53279ceb838b2012b9c8273f6ef97ffe695a25ae5343047ad5e1d925f67d2: Status 404 returned error can't find the container with id bdd53279ceb838b2012b9c8273f6ef97ffe695a25ae5343047ad5e1d925f67d2
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.276777    5296 manager.go:1116] Failed to create existing container: /crio-83ab70c3927ef253fa508910456d1ab9921a8c3d28db082f4300dc67b29a9afb: Error finding container 83ab70c3927ef253fa508910456d1ab9921a8c3d28db082f4300dc67b29a9afb: Status 404 returned error can't find the container with id 83ab70c3927ef253fa508910456d1ab9921a8c3d28db082f4300dc67b29a9afb
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.276985    5296 manager.go:1116] Failed to create existing container: /crio-473b1b2117910f3427d7146f0bf589c4eb7ff9584f7024ab9801389a70b6fa29: Error finding container 473b1b2117910f3427d7146f0bf589c4eb7ff9584f7024ab9801389a70b6fa29: Status 404 returned error can't find the container with id 473b1b2117910f3427d7146f0bf589c4eb7ff9584f7024ab9801389a70b6fa29
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.277157    5296 manager.go:1116] Failed to create existing container: /docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio-59a2b1cb9d90fca20ac3fb152dcd0c4fbbc8c485fba72d7f3fd024b71a2eb05c: Error finding container 59a2b1cb9d90fca20ac3fb152dcd0c4fbbc8c485fba72d7f3fd024b71a2eb05c: Status 404 returned error can't find the container with id 59a2b1cb9d90fca20ac3fb152dcd0c4fbbc8c485fba72d7f3fd024b71a2eb05c
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.277360    5296 manager.go:1116] Failed to create existing container: /docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio-83ab70c3927ef253fa508910456d1ab9921a8c3d28db082f4300dc67b29a9afb: Error finding container 83ab70c3927ef253fa508910456d1ab9921a8c3d28db082f4300dc67b29a9afb: Status 404 returned error can't find the container with id 83ab70c3927ef253fa508910456d1ab9921a8c3d28db082f4300dc67b29a9afb
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.277568    5296 manager.go:1116] Failed to create existing container: /crio-802a7e58a9fc422680d44b60e8cd9a9f345b52efcad3aa6f33c4b1af0a7d5ee2: Error finding container 802a7e58a9fc422680d44b60e8cd9a9f345b52efcad3aa6f33c4b1af0a7d5ee2: Status 404 returned error can't find the container with id 802a7e58a9fc422680d44b60e8cd9a9f345b52efcad3aa6f33c4b1af0a7d5ee2
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.277739    5296 manager.go:1116] Failed to create existing container: /crio-6d1a127f36d2119ad4fd32598c22f59cc8794d4d1ecb22b01dffad1d88365f86: Error finding container 6d1a127f36d2119ad4fd32598c22f59cc8794d4d1ecb22b01dffad1d88365f86: Status 404 returned error can't find the container with id 6d1a127f36d2119ad4fd32598c22f59cc8794d4d1ecb22b01dffad1d88365f86
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.277915    5296 manager.go:1116] Failed to create existing container: /docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio-3d774d9c25044197ce37f59d6a35a5a6eb020558407253f05041b4c74a4823da: Error finding container 3d774d9c25044197ce37f59d6a35a5a6eb020558407253f05041b4c74a4823da: Status 404 returned error can't find the container with id 3d774d9c25044197ce37f59d6a35a5a6eb020558407253f05041b4c74a4823da
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.278120    5296 manager.go:1116] Failed to create existing container: /crio-59a2b1cb9d90fca20ac3fb152dcd0c4fbbc8c485fba72d7f3fd024b71a2eb05c: Error finding container 59a2b1cb9d90fca20ac3fb152dcd0c4fbbc8c485fba72d7f3fd024b71a2eb05c: Status 404 returned error can't find the container with id 59a2b1cb9d90fca20ac3fb152dcd0c4fbbc8c485fba72d7f3fd024b71a2eb05c
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.278310    5296 manager.go:1116] Failed to create existing container: /docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio-bf24b3b007d859a7cfa0a5ca7264ba7f9516fbbd22c33d1533bcd5c90c059a58: Error finding container bf24b3b007d859a7cfa0a5ca7264ba7f9516fbbd22c33d1533bcd5c90c059a58: Status 404 returned error can't find the container with id bf24b3b007d859a7cfa0a5ca7264ba7f9516fbbd22c33d1533bcd5c90c059a58
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.278496    5296 manager.go:1116] Failed to create existing container: /docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio-473b1b2117910f3427d7146f0bf589c4eb7ff9584f7024ab9801389a70b6fa29: Error finding container 473b1b2117910f3427d7146f0bf589c4eb7ff9584f7024ab9801389a70b6fa29: Status 404 returned error can't find the container with id 473b1b2117910f3427d7146f0bf589c4eb7ff9584f7024ab9801389a70b6fa29
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.278650    5296 manager.go:1116] Failed to create existing container: /docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio-802a7e58a9fc422680d44b60e8cd9a9f345b52efcad3aa6f33c4b1af0a7d5ee2: Error finding container 802a7e58a9fc422680d44b60e8cd9a9f345b52efcad3aa6f33c4b1af0a7d5ee2: Status 404 returned error can't find the container with id 802a7e58a9fc422680d44b60e8cd9a9f345b52efcad3aa6f33c4b1af0a7d5ee2
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.278816    5296 manager.go:1116] Failed to create existing container: /docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio-6d1a127f36d2119ad4fd32598c22f59cc8794d4d1ecb22b01dffad1d88365f86: Error finding container 6d1a127f36d2119ad4fd32598c22f59cc8794d4d1ecb22b01dffad1d88365f86: Status 404 returned error can't find the container with id 6d1a127f36d2119ad4fd32598c22f59cc8794d4d1ecb22b01dffad1d88365f86
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.279019    5296 manager.go:1116] Failed to create existing container: /crio-bdd53279ceb838b2012b9c8273f6ef97ffe695a25ae5343047ad5e1d925f67d2: Error finding container bdd53279ceb838b2012b9c8273f6ef97ffe695a25ae5343047ad5e1d925f67d2: Status 404 returned error can't find the container with id bdd53279ceb838b2012b9c8273f6ef97ffe695a25ae5343047ad5e1d925f67d2
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.386999    5296 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757020955386796398  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 04 21:22:35 functional-434682 kubelet[5296]: E0904 21:22:35.387033    5296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757020955386796398  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 04 21:22:39 functional-434682 kubelet[5296]: E0904 21:22:39.146330    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="041c009e-af9d-4eb6-a22e-20603c327a58"
	Sep 04 21:22:39 functional-434682 kubelet[5296]: E0904 21:22:39.147113    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1ec180eb-eb78-40a3-aab9-f321efb0233d"
	
	
	==> storage-provisioner [9aae164b6b622e57cfa63c0d10780b1d415a846e61eef71f81fb632601cbf077] <==
	I0904 21:09:06.126015       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 21:09:06.133313       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 21:09:06.133358       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0904 21:09:06.135228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:09:09.590166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:09:13.849978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c73eddf3856d61a0ae21842a7d5d9054379b12e7b84ee634289addb01adb5957] <==
	W0904 21:22:17.681527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:19.683965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:19.689382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:21.692659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:21.696741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:23.699554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:23.704333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:25.707594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:25.712613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:27.715458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:27.719199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:29.722194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:29.726196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:31.729551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:31.733558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:33.736598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:33.740461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:35.743414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:35.747162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:37.749756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:37.754437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:39.757193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:39.762053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:41.764923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:22:41.768874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-434682 -n functional-434682
helpers_test.go:269: (dbg) Run:  kubectl --context functional-434682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-434682 describe pod busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-434682 describe pod busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb: exit status 1 (95.483053ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:16:14 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7573ae495fc6e3c79942ee196188587a2b74c897d904477d1231fc3ca6208b33
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 04 Sep 2025 21:17:32 +0000
	      Finished:     Thu, 04 Sep 2025 21:17:32 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h9rcr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-h9rcr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m28s  default-scheduler  Successfully assigned default/busybox-mount to functional-434682
	  Normal  Pulling    6m28s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.225s (1m18.121s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m10s  kubelet            Created container: mount-munger
	  Normal  Started    5m10s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-8n82x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:15:40 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6kdmp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6kdmp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  7m2s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8n82x to functional-434682
	  Normal   Pulling    2m45s (x4 over 7m1s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     65s (x4 over 6m12s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     65s (x4 over 6m12s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    12s (x9 over 6m12s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     12s (x9 over 6m12s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-84wxx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zbxg8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zbxg8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-84wxx to functional-434682
	  Normal   Pulling    4m45s (x5 over 12m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m38s (x5 over 11m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     3m38s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    78s (x22 over 11m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     78s (x22 over 11m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-wzh2r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:02 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hc4jr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hc4jr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  12m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-wzh2r to functional-434682
	  Warning  Failed     12m                    kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     10m                    kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m12s (x5 over 12m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m39s (x5 over 12m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m39s (x3 over 8m46s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m12s (x22 over 12m)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m (x23 over 12m)      kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:04 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zhxrs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zhxrs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  12m                    default-scheduler  Successfully assigned default/nginx-svc to functional-434682
	  Warning  Failed     6m12s (x3 over 11m)    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m44s (x5 over 12m)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m37s (x5 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     2m37s (x2 over 7m44s)  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     65s (x17 over 11m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x22 over 11m)      kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:08 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bx7br (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bx7br:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/sp-pod to functional-434682
	  Warning  Failed     5m11s                kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m40s (x5 over 12m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     96s (x4 over 11m)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     96s (x5 over 11m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x17 over 11m)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     3s (x17 over 11m)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-xj66s" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5nqcb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-434682 describe pod busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb: exit status 1
E0904 21:23:12.066623  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:24:35.138614  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.21s)
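Taken together, the events above point at two image-pull problems rather than at the dashboard logic itself: pulls of docker.io images (mysql:5.7, nginx, nginx:alpine) hit Docker Hub's unauthenticated rate limit, and the short name kicbase/echo-server cannot be resolved because CRI-O has no unqualified-search registries configured. A minimal workaround sketch, assuming the functional-434682 profile is still running and the images are already present in the host's Docker cache; the echo-server tag is illustrative and not taken from the test:

    # Pre-load the rate-limited images into the cluster so the kubelet does not
    # pull them from Docker Hub (names match the failing pods above).
    minikube -p functional-434682 image load docker.io/nginx:alpine
    minikube -p functional-434682 image load docker.io/mysql:5.7
    # Point the failing deployment at a fully qualified image so no
    # unqualified-search registry is needed (tag 1.0 is an assumption).
    kubectl --context functional-434682 set image deployment/hello-node \
        echo-server=docker.io/kicbase/echo-server:1.0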

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (602.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-434682 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-434682 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-84wxx" [859f8e7f-550a-454e-86cf-f3683973631c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-434682 -n functional-434682
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-04 21:20:08.86302428 +0000 UTC m=+1485.706626843
functional_test.go:1645: (dbg) Run:  kubectl --context functional-434682 describe po hello-node-connect-7d85dfc575-84wxx -n default
functional_test.go:1645: (dbg) kubectl --context functional-434682 describe po hello-node-connect-7d85dfc575-84wxx -n default:
Name:             hello-node-connect-7d85dfc575-84wxx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-434682/192.168.49.2
Start Time:       Thu, 04 Sep 2025 21:10:08 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zbxg8 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-zbxg8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-84wxx to functional-434682
Normal   Pulling    2m11s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     64s (x5 over 8m59s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     64s (x5 over 8m59s)   kubelet            Error: ErrImagePull
Normal   BackOff    13s (x15 over 8m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     13s (x15 over 8m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-434682 logs hello-node-connect-7d85dfc575-84wxx -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-434682 logs hello-node-connect-7d85dfc575-84wxx -n default: exit status 1 (57.994018ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-84wxx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-434682 logs hello-node-connect-7d85dfc575-84wxx -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-434682 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-84wxx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-434682/192.168.49.2
Start Time:       Thu, 04 Sep 2025 21:10:08 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zbxg8 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-zbxg8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-84wxx to functional-434682
Normal   Pulling    2m12s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     65s (x5 over 9m)     kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     65s (x5 over 9m)     kubelet            Error: ErrImagePull
Normal   BackOff    14s (x15 over 9m)    kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     14s (x15 over 9m)    kubelet            Error: ImagePullBackOff
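The ErrImagePull events above all trace back to CRI-O's short-name policy: with no unqualified-search registries defined in /etc/containers/registries.conf, the name kicbase/echo-server is never expanded to a registry-qualified reference. A hedged sketch of how a search registry could be added inside the node; the file path comes from the error message itself, while running it over ssh and restarting CRI-O afterwards are assumptions about the kicbase node image:

    # Inside the node (minikube -p functional-434682 ssh):
    printf 'unqualified-search-registries = ["docker.io"]\n' \
        | sudo tee -a /etc/containers/registries.conf
    sudo systemctl restart crio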

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-434682 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-434682 logs -l app=hello-node-connect: exit status 1 (58.468927ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-84wxx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-434682 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-434682 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.2.142
IPs:                      10.105.2.142
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30384/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
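The empty Endpoints field is consistent with the pod describe above: the NodePort service selects app=hello-node-connect, but its only matching pod never becomes Ready, so no addresses are published. Two illustrative checks against the same context (commands only; what they would print here is an assumption):

    # Shows no addresses until a selected pod becomes Ready.
    kubectl --context functional-434682 get endpoints hello-node-connect -o wide
    # Confirms the selector matches only the pod stuck in ImagePullBackOff.
    kubectl --context functional-434682 get pods -l app=hello-node-connect -o wide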
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-434682
helpers_test.go:243: (dbg) docker inspect functional-434682:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e",
	        "Created": "2025-09-04T21:07:38.362965102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 421064,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T21:07:38.3914292Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:26724cb1013292a869503ea8c35fa5a932e6ec3e4e85e318411373ae6a24478b",
	        "ResolvConfPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/hosts",
	        "LogPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e-json.log",
	        "Name": "/functional-434682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-434682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-434682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e",
	                "LowerDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686-init/diff:/var/lib/docker/overlay2/fcc29a201369e5fb579bfafdafe90606e5b86bd2f4b52beb7f6a97f7e955214d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-434682",
	                "Source": "/var/lib/docker/volumes/functional-434682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-434682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-434682",
	                "name.minikube.sigs.k8s.io": "functional-434682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a1ee1974cafddfa91d00d9aacf8ecbbf723cb04b47fdc840a7a8d178cf57558",
	            "SandboxKey": "/var/run/docker/netns/5a1ee1974caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33157"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-434682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:eb:a6:0f:f0:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e2d7bc8acf0e9f0624cc76f4cbe69fbd7f4637588b37e979a792472035792fd9",
	                    "EndpointID": "9900ed97e30e5355fe09aab9ccab90615da0bf4734544f78893d6f734e005f15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-434682",
	                        "c103d7054280"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-434682 -n functional-434682
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-434682 logs -n 25: (1.338928122s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-434682 ssh -- ls -la /mount-9p                                                                         │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh            │ functional-434682 ssh sudo umount -f /mount-9p                                                                    │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ mount          │ -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount1 --alsologtostderr -v=1 │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ ssh            │ functional-434682 ssh findmnt -T /mount1                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ mount          │ -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount2 --alsologtostderr -v=1 │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ mount          │ -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount3 --alsologtostderr -v=1 │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ ssh            │ functional-434682 ssh findmnt -T /mount1                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh            │ functional-434682 ssh findmnt -T /mount2                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh            │ functional-434682 ssh findmnt -T /mount3                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ mount          │ -p functional-434682 --kill=true                                                                                  │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ license        │                                                                                                                   │ minikube          │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ dashboard      │ --url --port 36195 -p functional-434682 --alsologtostderr -v=1                                                    │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ start          │ -p functional-434682 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │                     │
	│ start          │ -p functional-434682 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                   │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │                     │
	│ start          │ -p functional-434682 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │                     │
	│ update-context │ functional-434682 update-context --alsologtostderr -v=2                                                           │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ update-context │ functional-434682 update-context --alsologtostderr -v=2                                                           │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ update-context │ functional-434682 update-context --alsologtostderr -v=2                                                           │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ image          │ functional-434682 image ls --format short --alsologtostderr                                                       │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ image          │ functional-434682 image ls --format yaml --alsologtostderr                                                        │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ ssh            │ functional-434682 ssh pgrep buildkitd                                                                             │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │                     │
	│ image          │ functional-434682 image build -t localhost/my-image:functional-434682 testdata/build --alsologtostderr            │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ image          │ functional-434682 image ls                                                                                        │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ image          │ functional-434682 image ls --format json --alsologtostderr                                                        │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	│ image          │ functional-434682 image ls --format table --alsologtostderr                                                       │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:20 UTC │ 04 Sep 25 21:20 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 21:20:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 21:20:04.993040  441830 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:20:04.993174  441830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:20:04.993196  441830 out.go:374] Setting ErrFile to fd 2...
	I0904 21:20:04.993204  441830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:20:04.993516  441830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:20:04.994106  441830 out.go:368] Setting JSON to false
	I0904 21:20:04.995202  441830 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10954,"bootTime":1757009851,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:20:04.995343  441830 start.go:140] virtualization: kvm guest
	I0904 21:20:04.997427  441830 out.go:179] * [functional-434682] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 21:20:04.999082  441830 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:20:04.999117  441830 notify.go:220] Checking for updates...
	I0904 21:20:05.001703  441830 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:20:05.002997  441830 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 21:20:05.004192  441830 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 21:20:05.005525  441830 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:20:05.006818  441830 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:20:05.008411  441830 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:20:05.008953  441830 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:20:05.029798  441830 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 21:20:05.029929  441830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:20:05.076670  441830 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-04 21:20:05.067863181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:20:05.076787  441830 docker.go:318] overlay module found
	I0904 21:20:05.079775  441830 out.go:179] * Using the docker driver based on existing profile
	I0904 21:20:05.081002  441830 start.go:304] selected driver: docker
	I0904 21:20:05.081019  441830 start.go:918] validating driver "docker" against &{Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:20:05.081107  441830 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:20:05.083069  441830 out.go:203] 
	W0904 21:20:05.084193  441830 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 21:20:05.085390  441830 out.go:203] 
	
	
	==> CRI-O <==
	Sep 04 21:18:44 functional-434682 crio[4932]: time="2025-09-04 21:18:44.147100486Z" level=info msg="Image docker.io/mysql:5.7 not found" id=965103ce-073d-43fd-b9f5-febde15858d5 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:18:47 functional-434682 crio[4932]: time="2025-09-04 21:18:47.146628739Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=9882b719-6943-43ee-947d-a3d72a8a15f7 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:18:47 functional-434682 crio[4932]: time="2025-09-04 21:18:47.146947556Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=9882b719-6943-43ee-947d-a3d72a8a15f7 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:18:58 functional-434682 crio[4932]: time="2025-09-04 21:18:58.147052226Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=cff9d5f6-e743-437e-b3bf-44cd36353140 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:18:58 functional-434682 crio[4932]: time="2025-09-04 21:18:58.147332186Z" level=info msg="Image docker.io/mysql:5.7 not found" id=cff9d5f6-e743-437e-b3bf-44cd36353140 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:04 functional-434682 crio[4932]: time="2025-09-04 21:19:04.425736504Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ec8753b8-8eaa-4c30-9f53-aa102d087c42 name=/runtime.v1.ImageService/PullImage
	Sep 04 21:19:04 functional-434682 crio[4932]: time="2025-09-04 21:19:04.426451323Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=79d7245b-593b-438e-bf27-b5d90230fb2c name=/runtime.v1.ImageService/PullImage
	Sep 04 21:19:04 functional-434682 crio[4932]: time="2025-09-04 21:19:04.427108993Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=a6ba0ec1-e884-444d-8b03-ea239a85211d name=/runtime.v1.ImageService/PullImage
	Sep 04 21:19:04 functional-434682 crio[4932]: time="2025-09-04 21:19:04.443337769Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 04 21:19:05 functional-434682 crio[4932]: time="2025-09-04 21:19:05.407472132Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4cae01cc-db68-4bd4-b498-88cec2521582 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:05 functional-434682 crio[4932]: time="2025-09-04 21:19:05.407736794Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=4cae01cc-db68-4bd4-b498-88cec2521582 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:11 functional-434682 crio[4932]: time="2025-09-04 21:19:11.146902799Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=f9ef55c8-6536-4270-bacb-d0c37df92c0b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:11 functional-434682 crio[4932]: time="2025-09-04 21:19:11.147140984Z" level=info msg="Image docker.io/mysql:5.7 not found" id=f9ef55c8-6536-4270-bacb-d0c37df92c0b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:20 functional-434682 crio[4932]: time="2025-09-04 21:19:20.146942089Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2f425f04-de8b-4be4-9ece-d350e11c7a1b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:20 functional-434682 crio[4932]: time="2025-09-04 21:19:20.147294990Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=2f425f04-de8b-4be4-9ece-d350e11c7a1b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:23 functional-434682 crio[4932]: time="2025-09-04 21:19:23.146399843Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=fb43ccb5-9052-4ffb-9ac8-e4e2085cff06 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:23 functional-434682 crio[4932]: time="2025-09-04 21:19:23.146636684Z" level=info msg="Image docker.io/mysql:5.7 not found" id=fb43ccb5-9052-4ffb-9ac8-e4e2085cff06 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:37 functional-434682 crio[4932]: time="2025-09-04 21:19:37.146480224Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=5620525b-e260-4ba0-adca-49d76c17d952 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:37 functional-434682 crio[4932]: time="2025-09-04 21:19:37.146703407Z" level=info msg="Image docker.io/mysql:5.7 not found" id=5620525b-e260-4ba0-adca-49d76c17d952 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:51 functional-434682 crio[4932]: time="2025-09-04 21:19:51.146729504Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=91d6d994-f755-4ece-8a85-d4f3aacbc88f name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:19:51 functional-434682 crio[4932]: time="2025-09-04 21:19:51.147016053Z" level=info msg="Image docker.io/mysql:5.7 not found" id=91d6d994-f755-4ece-8a85-d4f3aacbc88f name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:20:03 functional-434682 crio[4932]: time="2025-09-04 21:20:03.146562187Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=d8d595b7-4205-4544-a609-ccacf2296772 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:20:03 functional-434682 crio[4932]: time="2025-09-04 21:20:03.146835071Z" level=info msg="Image docker.io/mysql:5.7 not found" id=d8d595b7-4205-4544-a609-ccacf2296772 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:20:05 functional-434682 crio[4932]: time="2025-09-04 21:20:05.624254482Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=43a589e2-14c5-4ab1-8267-335d5ba48124 name=/runtime.v1.ImageService/PullImage
	Sep 04 21:20:05 functional-434682 crio[4932]: time="2025-09-04 21:20:05.628455999Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7573ae495fc6e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago       Exited              mount-munger              0                   67f5c151d8c01       busybox-mount
	67e1ba685c7ce       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   59a2b1cb9d90f       coredns-66bc5c9577-dcjtm
	02a541cc04bce       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      10 minutes ago      Running             kube-proxy                2                   bf24b3b007d85       kube-proxy-kjb6r
	c73eddf3856d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       3                   83ab70c3927ef       storage-provisioner
	5f0798737b0ee       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      10 minutes ago      Running             kindnet-cni               2                   802a7e58a9fc4       kindnet-6t97w
	d1ff769d1c0c6       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      10 minutes ago      Running             kube-apiserver            0                   765688e92610c       kube-apiserver-functional-434682
	48f5af473405f       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Running             kube-controller-manager   2                   473b1b2117910       kube-controller-manager-functional-434682
	dcfbf9517d0ea       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      10 minutes ago      Running             kube-scheduler            2                   3d774d9c25044       kube-scheduler-functional-434682
	8957f84301c86       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      2                   6d1a127f36d21       etcd-functional-434682
	9aae164b6b622       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       2                   83ab70c3927ef       storage-provisioner
	d1bf470c95037       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   59a2b1cb9d90f       coredns-66bc5c9577-dcjtm
	0fd149fb30af7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Exited              kindnet-cni               1                   802a7e58a9fc4       kindnet-6t97w
	147b1660e0ddf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      1                   6d1a127f36d21       etcd-functional-434682
	64aaae8d657da       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      11 minutes ago      Exited              kube-proxy                1                   bf24b3b007d85       kube-proxy-kjb6r
	753221e723ff2       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      11 minutes ago      Exited              kube-scheduler            1                   3d774d9c25044       kube-scheduler-functional-434682
	279dbefad3613       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      11 minutes ago      Exited              kube-controller-manager   1                   473b1b2117910       kube-controller-manager-functional-434682
	
	
	==> coredns [67e1ba685c7ce4de1937d4603c36309e97a95bbef649db8b26cebb6ca66c20eb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46904 - 52148 "HINFO IN 4965619897574769528.1949479442252656424. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.070291104s
	
	
	==> coredns [d1bf470c950376569202b921adaecbf97801a2ebfe9fffcfc07500259775d103] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40539 - 31795 "HINFO IN 6716744690789338561.844443993120435347. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.092547911s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-434682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-434682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a
	                    minikube.k8s.io/name=functional-434682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T21_07_53_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 21:07:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-434682
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 21:20:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 21:17:39 +0000   Thu, 04 Sep 2025 21:07:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 21:17:39 +0000   Thu, 04 Sep 2025 21:07:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 21:17:39 +0000   Thu, 04 Sep 2025 21:07:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 21:17:39 +0000   Thu, 04 Sep 2025 21:08:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-434682
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 01aa6837660246f486ffcb896223620f
	  System UUID:                41151989-98b1-4281-8655-a34190cf40fb
	  Boot ID:                    d34ed5fc-a148-45de-9a0e-f744d5f792e8
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8n82x                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  default                     hello-node-connect-7d85dfc575-84wxx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-wzh2r                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-dcjtm                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-434682                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-6t97w                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-434682              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-434682     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kjb6r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-434682              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-xj66s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5nqcb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-434682 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-434682 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-434682 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-434682 event: Registered Node functional-434682 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-434682 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-434682 event: Registered Node functional-434682 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-434682 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-434682 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-434682 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-434682 event: Registered Node functional-434682 in Controller
	
	
	==> dmesg <==
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000001] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +2.015727] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000007] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000000] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000003] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +4.127589] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000006] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000000] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000002] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +8.191103] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000006] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[  +0.000017] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5095d6e5e40d
	[  +0.000002] ll header: 00000000: 26 2a 2b 01 f7 fb 8e 05 15 8b 95 fa 08 00
	[Sep 4 21:17] FS-Cache: Duplicate cookie detected
	[  +0.004735] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006741] FS-Cache: O-cookie d=00000000e536cbeb{9P.session} n=0000000049c43b6f
	[  +0.007557] FS-Cache: O-key=[10] '34323937353933383032'
	[  +0.005349] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006948] FS-Cache: N-cookie d=00000000e536cbeb{9P.session} n=000000005af789fc
	[  +0.008907] FS-Cache: N-key=[10] '34323937353933383032'
	
	
	==> etcd [147b1660e0ddf97548b185775163fa70312e80a72b723002562b6c48722dc082] <==
	{"level":"warn","ts":"2025-09-04T21:08:53.582396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.597054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.603187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.677082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.684274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.691194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:08:53.754454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36856","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T21:09:17.209581Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-04T21:09:17.209688Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-434682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-04T21:09:17.209779Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-04T21:09:17.488998Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-04T21:09:17.490450Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T21:09:17.490506Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-04T21:09:17.490516Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-04T21:09:17.490584Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-04T21:09:17.490587Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-09-04T21:09:17.490593Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-04T21:09:17.490524Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-04T21:09:17.490611Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-04T21:09:17.490618Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T21:09:17.490589Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-04T21:09:17.493741Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-04T21:09:17.493803Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T21:09:17.493833Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-04T21:09:17.493842Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-434682","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [8957f84301c86329b1ba346dc71da62fb29a19e0e4117e03b663512e68b198f2] <==
	{"level":"warn","ts":"2025-09-04T21:09:37.287614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.294726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.349193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.355340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.362426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.368780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.388971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.394843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.401550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.407730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.413741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.420262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.452212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.460028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.467015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.472974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.479382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.486266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.523417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.544995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.551759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:09:37.607583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43872","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T21:19:36.795428Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1015}
	{"level":"info","ts":"2025-09-04T21:19:36.813729Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1015,"took":"17.971289ms","hash":913127590,"current-db-size-bytes":3592192,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1683456,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-09-04T21:19:36.813774Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":913127590,"revision":1015,"compact-revision":-1}
	
	
	==> kernel <==
	 21:20:10 up  3:02,  0 users,  load average: 0.42, 0.35, 0.43
	Linux functional-434682 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0fd149fb30af74d582046fc60a9f80ce2cd48cec39f94992069066c32e3c7cb2] <==
	I0904 21:08:51.952269       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0904 21:08:51.952686       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0904 21:08:51.952925       1 main.go:148] setting mtu 1500 for CNI 
	I0904 21:08:51.952942       1 main.go:178] kindnetd IP family: "ipv4"
	I0904 21:08:51.952956       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-04T21:08:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0904 21:08:52.247356       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0904 21:08:52.247435       1 controller.go:381] "Waiting for informer caches to sync"
	I0904 21:08:52.247473       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0904 21:08:52.347068       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0904 21:08:54.552811       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0904 21:08:55.448192       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0904 21:08:55.448221       1 metrics.go:72] Registering metrics
	I0904 21:08:55.448274       1 controller.go:711] "Syncing nftables rules"
	I0904 21:09:02.247716       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:09:02.247793       1 main.go:301] handling current node
	I0904 21:09:12.248838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:09:12.248888       1 main.go:301] handling current node
	
	
	==> kindnet [5f0798737b0ee775905f7ed84860ba0292fcc66d918bbe2bed6f39454e2ef304] <==
	I0904 21:18:09.952080       1 main.go:301] handling current node
	I0904 21:18:19.949963       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:18:19.949993       1 main.go:301] handling current node
	I0904 21:18:29.949220       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:18:29.949253       1 main.go:301] handling current node
	I0904 21:18:39.949108       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:18:39.949138       1 main.go:301] handling current node
	I0904 21:18:49.948927       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:18:49.948966       1 main.go:301] handling current node
	I0904 21:18:59.953829       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:18:59.953864       1 main.go:301] handling current node
	I0904 21:19:09.949238       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:19:09.949272       1 main.go:301] handling current node
	I0904 21:19:19.949388       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:19:19.949418       1 main.go:301] handling current node
	I0904 21:19:29.949325       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:19:29.949357       1 main.go:301] handling current node
	I0904 21:19:39.948926       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:19:39.948957       1 main.go:301] handling current node
	I0904 21:19:49.949963       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:19:49.949995       1 main.go:301] handling current node
	I0904 21:19:59.953254       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:19:59.953289       1 main.go:301] handling current node
	I0904 21:20:09.951342       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:20:09.951374       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d1ff769d1c0c6befb61371579e156cb8df3f874152f997a641d98dbfa7a31c3d] <==
	I0904 21:09:56.962338       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 21:10:01.956117       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.48.219"}
	I0904 21:10:01.995357       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0904 21:10:04.115686       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.112.123"}
	I0904 21:10:08.560431       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.2.142"}
	I0904 21:10:42.059430       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:10:59.832577       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:11:46.403941       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:12:18.795065       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:12:56.359284       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:13:46.928701       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:13:59.740504       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:14:57.548567       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:15:16.202697       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:15:40.741421       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.152.103"}
	I0904 21:16:21.853186       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:16:34.347299       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:17:27.098406       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:17:41.429731       1 controller.go:667] quota admission added evaluator for: namespaces
	I0904 21:17:41.566220       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.253.138"}
	I0904 21:17:41.578326       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.133.102"}
	I0904 21:17:53.583223       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:18:44.345042       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:19:10.962589       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 21:19:38.264685       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [279dbefad36138dca3e8b3083d9738f2befb8c0fdb16fc5221ce1d1045032b84] <==
	I0904 21:08:57.719051       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0904 21:08:57.744869       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 21:08:57.744894       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0904 21:08:57.744915       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0904 21:08:57.767124       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0904 21:08:57.767137       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0904 21:08:57.767170       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0904 21:08:57.767197       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0904 21:08:57.767206       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0904 21:08:57.767231       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0904 21:08:57.768391       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0904 21:08:57.770586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0904 21:08:57.770622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0904 21:08:57.770629       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0904 21:08:57.770710       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0904 21:08:57.770718       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0904 21:08:57.770747       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0904 21:08:57.770754       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0904 21:08:57.770759       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0904 21:08:57.771926       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0904 21:08:57.773107       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0904 21:08:57.775353       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0904 21:08:57.775454       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 21:08:57.779630       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0904 21:08:57.793917       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [48f5af473405f59b41d3ffd7c662a3d9baaa4584f9ffb7005f35ae65089b9947] <==
	I0904 21:09:41.654206       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0904 21:09:41.654231       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0904 21:09:41.654293       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0904 21:09:41.654357       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0904 21:09:41.654362       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-434682"
	I0904 21:09:41.654434       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 21:09:41.655382       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 21:09:41.656502       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0904 21:09:41.657593       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0904 21:09:41.658719       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 21:09:41.659792       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0904 21:09:41.659814       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0904 21:09:41.660983       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0904 21:09:41.663240       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0904 21:09:41.663330       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0904 21:09:41.663339       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0904 21:09:41.664527       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0904 21:09:41.665730       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0904 21:09:41.671737       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0904 21:17:41.471820       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 21:17:41.476246       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 21:17:41.477478       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 21:17:41.479895       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 21:17:41.481746       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 21:17:41.486362       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [02a541cc04bcecd4ac0c7cca2da8d9603b29a892bdad1bb5632d1ac58ce87821] <==
	I0904 21:09:39.581491       1 server_linux.go:53] "Using iptables proxy"
	I0904 21:09:39.698014       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 21:09:39.798964       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 21:09:39.799006       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 21:09:39.799149       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 21:09:39.820088       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 21:09:39.820137       1 server_linux.go:132] "Using iptables Proxier"
	I0904 21:09:39.824521       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 21:09:39.824875       1 server.go:527] "Version info" version="v1.34.0"
	I0904 21:09:39.824897       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:09:39.826297       1 config.go:200] "Starting service config controller"
	I0904 21:09:39.826311       1 config.go:106] "Starting endpoint slice config controller"
	I0904 21:09:39.826324       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 21:09:39.826334       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 21:09:39.826364       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 21:09:39.826370       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 21:09:39.826388       1 config.go:309] "Starting node config controller"
	I0904 21:09:39.826393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 21:09:39.826400       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 21:09:39.927396       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 21:09:39.927434       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 21:09:39.927442       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [64aaae8d657daf42d90854673aa7ce9152f3f2314bae53b22f99da336581d403] <==
	I0904 21:08:51.870065       1 server_linux.go:53] "Using iptables proxy"
	I0904 21:08:52.166652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0904 21:08:54.551281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-434682\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0904 21:08:56.067212       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 21:08:56.067257       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 21:08:56.067329       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 21:08:56.086742       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 21:08:56.086795       1 server_linux.go:132] "Using iptables Proxier"
	I0904 21:08:56.091114       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 21:08:56.091647       1 server.go:527] "Version info" version="v1.34.0"
	I0904 21:08:56.091678       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:08:56.092896       1 config.go:200] "Starting service config controller"
	I0904 21:08:56.092923       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 21:08:56.092933       1 config.go:106] "Starting endpoint slice config controller"
	I0904 21:08:56.092944       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 21:08:56.092964       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 21:08:56.093000       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 21:08:56.093028       1 config.go:309] "Starting node config controller"
	I0904 21:08:56.093041       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 21:08:56.093051       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 21:08:56.193435       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 21:08:56.193472       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 21:08:56.193511       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [753221e723ff279278be4081dd108fed0fe299d51e26bf89fcbd7b19210b8ee2] <==
	I0904 21:08:52.682221       1 serving.go:386] Generated self-signed cert in-memory
	W0904 21:08:54.347449       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 21:08:54.347601       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 21:08:54.347674       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 21:08:54.347716       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 21:08:54.460261       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 21:08:54.460303       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:08:54.467172       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 21:08:54.467450       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:08:54.467475       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:08:54.467560       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0904 21:08:54.553912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 21:08:54.553960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 21:08:54.557079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I0904 21:08:54.567733       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:09:17.210279       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0904 21:09:17.210329       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0904 21:09:17.210361       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0904 21:09:17.210412       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:09:17.210620       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0904 21:09:17.210645       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dcfbf9517d0ea85a697b2576bca9726545016d4d2a7bd6c9ba9e771af9db338e] <==
	I0904 21:09:36.468599       1 serving.go:386] Generated self-signed cert in-memory
	W0904 21:09:38.190339       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 21:09:38.190580       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 21:09:38.190649       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 21:09:38.190684       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 21:09:38.367009       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 21:09:38.370007       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:09:38.372630       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:09:38.372665       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:09:38.373087       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 21:09:38.373496       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 21:09:38.472872       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 21:19:35 functional-434682 kubelet[5296]: E0904 21:19:35.278243    5296 manager.go:1116] Failed to create existing container: /crio-802a7e58a9fc422680d44b60e8cd9a9f345b52efcad3aa6f33c4b1af0a7d5ee2: Error finding container 802a7e58a9fc422680d44b60e8cd9a9f345b52efcad3aa6f33c4b1af0a7d5ee2: Status 404 returned error can't find the container with id 802a7e58a9fc422680d44b60e8cd9a9f345b52efcad3aa6f33c4b1af0a7d5ee2
	Sep 04 21:19:35 functional-434682 kubelet[5296]: E0904 21:19:35.278394    5296 manager.go:1116] Failed to create existing container: /crio-473b1b2117910f3427d7146f0bf589c4eb7ff9584f7024ab9801389a70b6fa29: Error finding container 473b1b2117910f3427d7146f0bf589c4eb7ff9584f7024ab9801389a70b6fa29: Status 404 returned error can't find the container with id 473b1b2117910f3427d7146f0bf589c4eb7ff9584f7024ab9801389a70b6fa29
	Sep 04 21:19:35 functional-434682 kubelet[5296]: E0904 21:19:35.278601    5296 manager.go:1116] Failed to create existing container: /docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio-bf24b3b007d859a7cfa0a5ca7264ba7f9516fbbd22c33d1533bcd5c90c059a58: Error finding container bf24b3b007d859a7cfa0a5ca7264ba7f9516fbbd22c33d1533bcd5c90c059a58: Status 404 returned error can't find the container with id bf24b3b007d859a7cfa0a5ca7264ba7f9516fbbd22c33d1533bcd5c90c059a58
	Sep 04 21:19:35 functional-434682 kubelet[5296]: E0904 21:19:35.278797    5296 manager.go:1116] Failed to create existing container: /docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio-59a2b1cb9d90fca20ac3fb152dcd0c4fbbc8c485fba72d7f3fd024b71a2eb05c: Error finding container 59a2b1cb9d90fca20ac3fb152dcd0c4fbbc8c485fba72d7f3fd024b71a2eb05c: Status 404 returned error can't find the container with id 59a2b1cb9d90fca20ac3fb152dcd0c4fbbc8c485fba72d7f3fd024b71a2eb05c
	Sep 04 21:19:35 functional-434682 kubelet[5296]: E0904 21:19:35.278967    5296 manager.go:1116] Failed to create existing container: /docker/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/crio-83ab70c3927ef253fa508910456d1ab9921a8c3d28db082f4300dc67b29a9afb: Error finding container 83ab70c3927ef253fa508910456d1ab9921a8c3d28db082f4300dc67b29a9afb: Status 404 returned error can't find the container with id 83ab70c3927ef253fa508910456d1ab9921a8c3d28db082f4300dc67b29a9afb
	Sep 04 21:19:35 functional-434682 kubelet[5296]: E0904 21:19:35.279130    5296 manager.go:1116] Failed to create existing container: /crio-6d1a127f36d2119ad4fd32598c22f59cc8794d4d1ecb22b01dffad1d88365f86: Error finding container 6d1a127f36d2119ad4fd32598c22f59cc8794d4d1ecb22b01dffad1d88365f86: Status 404 returned error can't find the container with id 6d1a127f36d2119ad4fd32598c22f59cc8794d4d1ecb22b01dffad1d88365f86
	Sep 04 21:19:35 functional-434682 kubelet[5296]: E0904 21:19:35.359656    5296 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757020775359436222  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 04 21:19:35 functional-434682 kubelet[5296]: E0904 21:19:35.359694    5296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757020775359436222  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 04 21:19:37 functional-434682 kubelet[5296]: E0904 21:19:37.146989    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-wzh2r" podUID="a63863a4-7fbe-4d12-b15a-fcfb930c1a96"
	Sep 04 21:19:42 functional-434682 kubelet[5296]: E0904 21:19:42.145885    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-84wxx" podUID="859f8e7f-550a-454e-86cf-f3683973631c"
	Sep 04 21:19:44 functional-434682 kubelet[5296]: E0904 21:19:44.145842    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-8n82x" podUID="795d04f0-d02a-434e-a4ab-297ee10360de"
	Sep 04 21:19:45 functional-434682 kubelet[5296]: E0904 21:19:45.360823    5296 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757020785360591683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 04 21:19:45 functional-434682 kubelet[5296]: E0904 21:19:45.360854    5296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757020785360591683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 04 21:19:51 functional-434682 kubelet[5296]: E0904 21:19:51.147316    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-wzh2r" podUID="a63863a4-7fbe-4d12-b15a-fcfb930c1a96"
	Sep 04 21:19:55 functional-434682 kubelet[5296]: E0904 21:19:55.147017    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-84wxx" podUID="859f8e7f-550a-454e-86cf-f3683973631c"
	Sep 04 21:19:55 functional-434682 kubelet[5296]: E0904 21:19:55.361935    5296 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757020795361716738  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 04 21:19:55 functional-434682 kubelet[5296]: E0904 21:19:55.361975    5296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757020795361716738  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 04 21:20:03 functional-434682 kubelet[5296]: E0904 21:20:03.147131    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-wzh2r" podUID="a63863a4-7fbe-4d12-b15a-fcfb930c1a96"
	Sep 04 21:20:05 functional-434682 kubelet[5296]: E0904 21:20:05.363447    5296 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757020805363153490  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 04 21:20:05 functional-434682 kubelet[5296]: E0904 21:20:05.363495    5296 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757020805363153490  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 04 21:20:05 functional-434682 kubelet[5296]: E0904 21:20:05.623818    5296 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 04 21:20:05 functional-434682 kubelet[5296]: E0904 21:20:05.623883    5296 kuberuntime_image.go:43] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 04 21:20:05 functional-434682 kubelet[5296]: E0904 21:20:05.624081    5296 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(1ec180eb-eb78-40a3-aab9-f321efb0233d): ErrImagePull: loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 04 21:20:05 functional-434682 kubelet[5296]: E0904 21:20:05.624147    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1ec180eb-eb78-40a3-aab9-f321efb0233d"
	Sep 04 21:20:10 functional-434682 kubelet[5296]: E0904 21:20:10.146302    5296 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-84wxx" podUID="859f8e7f-550a-454e-86cf-f3683973631c"
	
	
	==> storage-provisioner [9aae164b6b622e57cfa63c0d10780b1d415a846e61eef71f81fb632601cbf077] <==
	I0904 21:09:06.126015       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 21:09:06.133313       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 21:09:06.133358       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0904 21:09:06.135228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:09:09.590166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:09:13.849978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c73eddf3856d61a0ae21842a7d5d9054379b12e7b84ee634289addb01adb5957] <==
	W0904 21:19:45.123617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:47.126344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:47.131184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:49.134482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:49.138189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:51.140719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:51.144717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:53.147918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:53.152827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:55.158601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:55.161726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:57.164684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:57.168268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:59.171444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:19:59.176575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:20:01.179700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:20:01.183686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:20:03.187682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:20:03.192164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:20:05.195416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:20:05.200842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:20:07.203522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:20:07.209069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:20:09.212032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 21:20:09.217345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
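Note on the kubelet log above: the repeated "short-name \"kicbase/echo-server:latest\" did not resolve" errors mean CRI-O on the node has no unqualified-search registries configured, and the nginx/mysql failures are Docker Hub's unauthenticated pull rate limit (toomanyrequests), not cluster faults. A minimal sketch of a short-name workaround, assuming docker.io is the intended registry, is either a registries.conf entry on the node or a fully qualified image reference; both lines below are illustrative only:

    # hypothetical fragment for /etc/containers/registries.conf on the node
    unqualified-search-registries = ["docker.io"]
    # or reference the image fully qualified in the workload (names taken from the describe output)
    kubectl --context functional-434682 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest
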
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-434682 -n functional-434682
helpers_test.go:269: (dbg) Run:  kubectl --context functional-434682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-434682 describe pod busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-434682 describe pod busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb: exit status 1 (94.676835ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:16:14 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7573ae495fc6e3c79942ee196188587a2b74c897d904477d1231fc3ca6208b33
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 04 Sep 2025 21:17:32 +0000
	      Finished:     Thu, 04 Sep 2025 21:17:32 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h9rcr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-h9rcr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m57s  default-scheduler  Successfully assigned default/busybox-mount to functional-434682
	  Normal  Pulling    3m57s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m39s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.225s (1m18.121s including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m39s  kubelet            Created container: mount-munger
	  Normal  Started    2m39s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-8n82x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:15:40 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6kdmp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6kdmp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m31s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8n82x to functional-434682
	  Warning  Failed     67s (x3 over 3m41s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     67s (x3 over 3m41s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    27s (x5 over 3m41s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     27s (x5 over 3m41s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    14s (x4 over 4m30s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-84wxx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zbxg8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zbxg8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-84wxx to functional-434682
	  Normal   Pulling    2m14s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     67s (x5 over 9m2s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     67s (x5 over 9m2s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x16 over 9m2s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x16 over 9m2s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-wzh2r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:02 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hc4jr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hc4jr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-wzh2r to functional-434682
	  Warning  Failed     9m33s                 kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m47s                 kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m41s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m8s (x5 over 9m33s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m8s (x3 over 6m15s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     60s (x16 over 9m32s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    8s (x20 over 9m32s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:04 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zhxrs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zhxrs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/nginx-svc to functional-434682
	  Warning  Failed     3m41s (x3 over 9m2s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m26s (x10 over 9m2s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m26s (x10 over 9m2s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m13s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6s (x5 over 9m2s)      kubelet            Error: ErrImagePull
	  Warning  Failed     6s (x2 over 5m13s)     kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:08 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bx7br (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bx7br:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-434682
	  Warning  Failed     4m43s (x3 over 8m32s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m40s (x4 over 8m32s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m40s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    84s (x10 over 8m31s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     84s (x10 over 8m31s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    69s (x5 over 10m)      kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-xj66s" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5nqcb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-434682 describe pod busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.83s)
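Note: this ServiceCmdConnect failure traces back to the same two pull problems recorded in the pod events above: Docker Hub's unauthenticated rate limit (toomanyrequests) on docker.io/mysql:5.7 and docker.io/nginx:alpine, plus the unresolved short name kicbase/echo-server. A hedged sketch of how the rate-limited pulls could be avoided on a rerun, assuming the images are already present on the host, is to pre-load them into the node with minikube's image load subcommand (the same subcommand that appears later in this run's Audit table) so the kubelet never reaches Docker Hub:

    # illustrative only
    minikube -p functional-434682 image load docker.io/nginx:alpine
    minikube -p functional-434682 image load docker.io/mysql:5.7
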

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (367.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [d6ed1932-aaa0-4085-9f5d-f85a94620423] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00292978s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-434682 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-434682 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-434682 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-434682 apply -f testdata/storage-provisioner/pod.yaml
I0904 21:10:08.884649  388360 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [041c009e-af9d-4eb6-a22e-20603c327a58] Pending
helpers_test.go:352: "sp-pod" [041c009e-af9d-4eb6-a22e-20603c327a58] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0904 21:10:55.935158  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:13:12.066481  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:13:39.777257  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-434682 -n functional-434682
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-04 21:16:09.172784547 +0000 UTC m=+1246.016387123
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-434682 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-434682 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-434682/192.168.49.2
Start Time:       Thu, 04 Sep 2025 21:10:08 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:  10.244.0.7
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bx7br (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-bx7br:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m1s                 default-scheduler  Successfully assigned default/sp-pod to functional-434682
Normal   Pulling    2m19s (x3 over 6m)   kubelet            Pulling image "docker.io/nginx"
Warning  Failed     41s (x3 over 4m30s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     41s (x3 over 4m30s)  kubelet            Error: ErrImagePull
Normal   BackOff    11s (x4 over 4m29s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     11s (x4 over 4m29s)  kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-434682 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-434682 logs sp-pod -n default: exit status 1 (64.992656ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-434682 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
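Note: this PersistentVolumeClaim failure is likewise an image-pull problem rather than a storage one; the sp-pod events show docker.io/nginx blocked by the same toomanyrequests rate limit, while the claim was created and the pod scheduled. A minimal check of the storage side, reusing the claim name from the pod spec, could be:

    # illustrative only; myclaim is the ClaimName shown in the sp-pod describe output
    kubectl --context functional-434682 get pvc myclaim -o wide
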
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-434682
helpers_test.go:243: (dbg) docker inspect functional-434682:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e",
	        "Created": "2025-09-04T21:07:38.362965102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 421064,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T21:07:38.3914292Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:26724cb1013292a869503ea8c35fa5a932e6ec3e4e85e318411373ae6a24478b",
	        "ResolvConfPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/hosts",
	        "LogPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e-json.log",
	        "Name": "/functional-434682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-434682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-434682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e",
	                "LowerDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686-init/diff:/var/lib/docker/overlay2/fcc29a201369e5fb579bfafdafe90606e5b86bd2f4b52beb7f6a97f7e955214d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-434682",
	                "Source": "/var/lib/docker/volumes/functional-434682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-434682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-434682",
	                "name.minikube.sigs.k8s.io": "functional-434682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a1ee1974cafddfa91d00d9aacf8ecbbf723cb04b47fdc840a7a8d178cf57558",
	            "SandboxKey": "/var/run/docker/netns/5a1ee1974caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33157"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-434682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:eb:a6:0f:f0:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e2d7bc8acf0e9f0624cc76f4cbe69fbd7f4637588b37e979a792472035792fd9",
	                    "EndpointID": "9900ed97e30e5355fe09aab9ccab90615da0bf4734544f78893d6f734e005f15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-434682",
	                        "c103d7054280"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
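For readers tracing the inspect output above: later in this same log the harness resolves the container's forwarded SSH port with a `docker container inspect` Go template (the `cli_runner.go:164` lines that read `.NetworkSettings.Ports "22/tcp"`). The following is a minimal, hypothetical standalone sketch of that lookup, not code from the test suite; it assumes the docker CLI is on PATH and the `functional-434682` container from this run still exists.

	// portlookup.go: hypothetical sketch of the port lookup seen in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template string the log's cli_runner invocations use to read
		// the host port mapped to the container's 22/tcp.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-434682").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// In this run the mapping shown above is 127.0.0.1:33155 -> 22/tcp.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}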
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-434682 -n functional-434682
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-434682 logs -n 25: (1.297941019s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-434682 ssh sudo cat /usr/share/ca-certificates/3883602.pem                                                                                           │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ cp      │ functional-434682 cp functional-434682:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1235956994/001/cp-test.txt                                      │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ image   │ functional-434682 image load --daemon kicbase/echo-server:functional-434682 --alsologtostderr                                                                   │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ ssh     │ functional-434682 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ ssh     │ functional-434682 ssh -n functional-434682 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ ssh     │ functional-434682 ssh echo hello                                                                                                                                │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ cp      │ functional-434682 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                       │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ ssh     │ functional-434682 ssh cat /etc/hostname                                                                                                                         │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ ssh     │ functional-434682 ssh -n functional-434682 sudo cat /tmp/does/not/exist/cp-test.txt                                                                             │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ tunnel  │ functional-434682 tunnel --alsologtostderr                                                                                                                      │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │                     │
	│ tunnel  │ functional-434682 tunnel --alsologtostderr                                                                                                                      │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │                     │
	│ image   │ functional-434682 image ls                                                                                                                                      │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ tunnel  │ functional-434682 tunnel --alsologtostderr                                                                                                                      │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │                     │
	│ image   │ functional-434682 image load --daemon kicbase/echo-server:functional-434682 --alsologtostderr                                                                   │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ image   │ functional-434682 image ls                                                                                                                                      │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ image   │ functional-434682 image load --daemon kicbase/echo-server:functional-434682 --alsologtostderr                                                                   │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ image   │ functional-434682 image ls                                                                                                                                      │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ image   │ functional-434682 image save kicbase/echo-server:functional-434682 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ image   │ functional-434682 image rm kicbase/echo-server:functional-434682 --alsologtostderr                                                                              │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ image   │ functional-434682 image ls                                                                                                                                      │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ image   │ functional-434682 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ image   │ functional-434682 image ls                                                                                                                                      │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ image   │ functional-434682 image save --daemon kicbase/echo-server:functional-434682 --alsologtostderr                                                                   │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ addons  │ functional-434682 addons list                                                                                                                                   │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ addons  │ functional-434682 addons list -o json                                                                                                                           │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 21:09:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 21:09:16.264166  427426 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:09:16.264410  427426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:09:16.264414  427426 out.go:374] Setting ErrFile to fd 2...
	I0904 21:09:16.264417  427426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:09:16.264591  427426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:09:16.265152  427426 out.go:368] Setting JSON to false
	I0904 21:09:16.266101  427426 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10305,"bootTime":1757009851,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:09:16.266189  427426 start.go:140] virtualization: kvm guest
	I0904 21:09:16.268100  427426 out.go:179] * [functional-434682] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 21:09:16.269361  427426 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:09:16.269415  427426 notify.go:220] Checking for updates...
	I0904 21:09:16.271526  427426 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:09:16.272749  427426 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 21:09:16.273866  427426 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 21:09:16.275096  427426 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:09:16.276178  427426 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:09:16.277599  427426 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:09:16.277680  427426 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:09:16.299288  427426 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 21:09:16.299352  427426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:09:16.344853  427426 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:66 SystemTime:2025-09-04 21:09:16.336206612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:09:16.344945  427426 docker.go:318] overlay module found
	I0904 21:09:16.346587  427426 out.go:179] * Using the docker driver based on existing profile
	I0904 21:09:16.347703  427426 start.go:304] selected driver: docker
	I0904 21:09:16.347710  427426 start.go:918] validating driver "docker" against &{Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:09:16.347772  427426 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:09:16.347842  427426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:09:16.392249  427426 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:66 SystemTime:2025-09-04 21:09:16.383381297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:09:16.393082  427426 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 21:09:16.393116  427426 cni.go:84] Creating CNI manager for ""
	I0904 21:09:16.393180  427426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 21:09:16.393247  427426 start.go:348] cluster config:
	{Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:09:16.394926  427426 out.go:179] * Starting "functional-434682" primary control-plane node in "functional-434682" cluster
	I0904 21:09:16.395999  427426 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 21:09:16.397013  427426 out.go:179] * Pulling base image v0.0.47-1756116447-21413 ...
	I0904 21:09:16.398022  427426 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 21:09:16.398051  427426 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 21:09:16.398057  427426 cache.go:58] Caching tarball of preloaded images
	I0904 21:09:16.398127  427426 preload.go:172] Found /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 21:09:16.398133  427426 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 21:09:16.398142  427426 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0904 21:09:16.398208  427426 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/config.json ...
	I0904 21:09:16.417640  427426 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon, skipping pull
	I0904 21:09:16.417650  427426 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 exists in daemon, skipping load
	I0904 21:09:16.417664  427426 cache.go:232] Successfully downloaded all kic artifacts
	I0904 21:09:16.417698  427426 start.go:360] acquireMachinesLock for functional-434682: {Name:mke450e0eb1aabfc0780d2d8a3576f25b1b623c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 21:09:16.417752  427426 start.go:364] duration metric: took 40.779µs to acquireMachinesLock for "functional-434682"
	I0904 21:09:16.417766  427426 start.go:96] Skipping create...Using existing machine configuration
	I0904 21:09:16.417770  427426 fix.go:54] fixHost starting: 
	I0904 21:09:16.417979  427426 cli_runner.go:164] Run: docker container inspect functional-434682 --format={{.State.Status}}
	I0904 21:09:16.434349  427426 fix.go:112] recreateIfNeeded on functional-434682: state=Running err=<nil>
	W0904 21:09:16.434370  427426 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 21:09:16.435964  427426 out.go:252] * Updating the running docker "functional-434682" container ...
	I0904 21:09:16.435981  427426 machine.go:93] provisionDockerMachine start ...
	I0904 21:09:16.436049  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:16.452168  427426 main.go:141] libmachine: Using SSH client type: native
	I0904 21:09:16.452402  427426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33155 <nil> <nil>}
	I0904 21:09:16.452408  427426 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 21:09:16.568159  427426 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-434682
	
	I0904 21:09:16.568180  427426 ubuntu.go:182] provisioning hostname "functional-434682"
	I0904 21:09:16.568234  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:16.584895  427426 main.go:141] libmachine: Using SSH client type: native
	I0904 21:09:16.585116  427426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33155 <nil> <nil>}
	I0904 21:09:16.585123  427426 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-434682 && echo "functional-434682" | sudo tee /etc/hostname
	I0904 21:09:16.710891  427426 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-434682
	
	I0904 21:09:16.710969  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:16.727909  427426 main.go:141] libmachine: Using SSH client type: native
	I0904 21:09:16.728095  427426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33155 <nil> <nil>}
	I0904 21:09:16.728106  427426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-434682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-434682/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-434682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 21:09:16.836637  427426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 21:09:16.836656  427426 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21490-384635/.minikube CaCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21490-384635/.minikube}
	I0904 21:09:16.836695  427426 ubuntu.go:190] setting up certificates
	I0904 21:09:16.836707  427426 provision.go:84] configureAuth start
	I0904 21:09:16.836777  427426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-434682
	I0904 21:09:16.853624  427426 provision.go:143] copyHostCerts
	I0904 21:09:16.853687  427426 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem, removing ...
	I0904 21:09:16.853699  427426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem
	I0904 21:09:16.853764  427426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem (1078 bytes)
	I0904 21:09:16.853872  427426 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem, removing ...
	I0904 21:09:16.853877  427426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem
	I0904 21:09:16.853904  427426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem (1123 bytes)
	I0904 21:09:16.853987  427426 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem, removing ...
	I0904 21:09:16.853991  427426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem
	I0904 21:09:16.854020  427426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem (1675 bytes)
	I0904 21:09:16.854092  427426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem org=jenkins.functional-434682 san=[127.0.0.1 192.168.49.2 functional-434682 localhost minikube]
	I0904 21:09:16.895000  427426 provision.go:177] copyRemoteCerts
	I0904 21:09:16.895052  427426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 21:09:16.895089  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:16.912206  427426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
	I0904 21:09:17.001338  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 21:09:17.023019  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0904 21:09:17.043859  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 21:09:17.064718  427426 provision.go:87] duration metric: took 228.000203ms to configureAuth
	I0904 21:09:17.064737  427426 ubuntu.go:206] setting minikube options for container-runtime
	I0904 21:09:17.064934  427426 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:09:17.065025  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:17.081946  427426 main.go:141] libmachine: Using SSH client type: native
	I0904 21:09:17.082142  427426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33155 <nil> <nil>}
	I0904 21:09:17.082153  427426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 21:09:22.407020  427426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 21:09:22.407041  427426 machine.go:96] duration metric: took 5.971053302s to provisionDockerMachine
	I0904 21:09:22.407053  427426 start.go:293] postStartSetup for "functional-434682" (driver="docker")
	I0904 21:09:22.407066  427426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 21:09:22.407139  427426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 21:09:22.407171  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:22.424285  427426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
	I0904 21:09:22.508935  427426 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 21:09:22.511905  427426 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 21:09:22.511922  427426 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 21:09:22.511928  427426 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 21:09:22.511934  427426 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 21:09:22.511943  427426 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/addons for local assets ...
	I0904 21:09:22.511989  427426 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/files for local assets ...
	I0904 21:09:22.512052  427426 filesync.go:149] local asset: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem -> 3883602.pem in /etc/ssl/certs
	I0904 21:09:22.512117  427426 filesync.go:149] local asset: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/test/nested/copy/388360/hosts -> hosts in /etc/test/nested/copy/388360
	I0904 21:09:22.512146  427426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/388360
	I0904 21:09:22.519691  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem --> /etc/ssl/certs/3883602.pem (1708 bytes)
	I0904 21:09:22.540343  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/test/nested/copy/388360/hosts --> /etc/test/nested/copy/388360/hosts (40 bytes)
	I0904 21:09:22.560706  427426 start.go:296] duration metric: took 153.640509ms for postStartSetup
	I0904 21:09:22.560791  427426 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:09:22.560841  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:22.577647  427426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
	I0904 21:09:22.657377  427426 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 21:09:22.661446  427426 fix.go:56] duration metric: took 6.243670156s for fixHost
	I0904 21:09:22.661459  427426 start.go:83] releasing machines lock for "functional-434682", held for 6.243702066s
	I0904 21:09:22.661525  427426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-434682
	I0904 21:09:22.677730  427426 ssh_runner.go:195] Run: cat /version.json
	I0904 21:09:22.677761  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:22.677842  427426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 21:09:22.677889  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:22.694751  427426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
	I0904 21:09:22.695979  427426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
	I0904 21:09:22.867530  427426 ssh_runner.go:195] Run: systemctl --version
	I0904 21:09:22.871629  427426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 21:09:23.008996  427426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 21:09:23.013322  427426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:09:23.021420  427426 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 21:09:23.021476  427426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:09:23.029090  427426 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 21:09:23.029101  427426 start.go:495] detecting cgroup driver to use...
	I0904 21:09:23.029130  427426 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 21:09:23.029172  427426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 21:09:23.040230  427426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 21:09:23.049958  427426 docker.go:218] disabling cri-docker service (if available) ...
	I0904 21:09:23.049991  427426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 21:09:23.061012  427426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 21:09:23.070630  427426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 21:09:23.187701  427426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 21:09:23.286605  427426 docker.go:234] disabling docker service ...
	I0904 21:09:23.286659  427426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 21:09:23.297766  427426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 21:09:23.307508  427426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 21:09:23.409961  427426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 21:09:23.516023  427426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 21:09:23.526283  427426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 21:09:23.541041  427426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 21:09:23.541089  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.549725  427426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 21:09:23.549764  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.558190  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.566383  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.574417  427426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 21:09:23.581962  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.590504  427426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.598316  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.606521  427426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 21:09:23.613565  427426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 21:09:23.620646  427426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:09:23.734347  427426 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 21:09:32.270975  427426 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.536597956s)
	I0904 21:09:32.270998  427426 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 21:09:32.271050  427426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 21:09:32.274588  427426 start.go:563] Will wait 60s for crictl version
	I0904 21:09:32.274633  427426 ssh_runner.go:195] Run: which crictl
	I0904 21:09:32.277818  427426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 21:09:32.310247  427426 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 21:09:32.310321  427426 ssh_runner.go:195] Run: crio --version
	I0904 21:09:32.343784  427426 ssh_runner.go:195] Run: crio --version
	I0904 21:09:32.377109  427426 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 21:09:32.378261  427426 cli_runner.go:164] Run: docker network inspect functional-434682 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 21:09:32.394323  427426 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 21:09:32.399131  427426 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0904 21:09:32.400164  427426 kubeadm.go:875] updating cluster {Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 21:09:32.400267  427426 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 21:09:32.400313  427426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 21:09:32.438885  427426 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 21:09:32.438896  427426 crio.go:433] Images already preloaded, skipping extraction
	I0904 21:09:32.438943  427426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 21:09:32.471724  427426 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 21:09:32.471782  427426 cache_images.go:85] Images are preloaded, skipping loading
	I0904 21:09:32.471802  427426 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0904 21:09:32.471912  427426 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-434682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 21:09:32.471981  427426 ssh_runner.go:195] Run: crio config
	I0904 21:09:32.514729  427426 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0904 21:09:32.514749  427426 cni.go:84] Creating CNI manager for ""
	I0904 21:09:32.514759  427426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 21:09:32.514767  427426 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 21:09:32.514792  427426 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-434682 NodeName:functional-434682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 21:09:32.514897  427426 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-434682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 21:09:32.514949  427426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 21:09:32.523132  427426 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 21:09:32.523181  427426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 21:09:32.530952  427426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0904 21:09:32.546824  427426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 21:09:32.562300  427426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I0904 21:09:32.577636  427426 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0904 21:09:32.580742  427426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:09:32.683105  427426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 21:09:32.693507  427426 certs.go:68] Setting up /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682 for IP: 192.168.49.2
	I0904 21:09:32.693530  427426 certs.go:194] generating shared ca certs ...
	I0904 21:09:32.693552  427426 certs.go:226] acquiring lock for ca certs: {Name:mkab35eaf89c739a644dd45428dbbd1b30c489d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:09:32.693719  427426 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key
	I0904 21:09:32.693750  427426 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key
	I0904 21:09:32.693756  427426 certs.go:256] generating profile certs ...
	I0904 21:09:32.693828  427426 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.key
	I0904 21:09:32.693868  427426 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/apiserver.key.1937a050
	I0904 21:09:32.693899  427426 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/proxy-client.key
	I0904 21:09:32.693997  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360.pem (1338 bytes)
	W0904 21:09:32.694020  427426 certs.go:480] ignoring /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360_empty.pem, impossibly tiny 0 bytes
	I0904 21:09:32.694025  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem (1679 bytes)
	I0904 21:09:32.694044  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem (1078 bytes)
	I0904 21:09:32.694062  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem (1123 bytes)
	I0904 21:09:32.694078  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem (1675 bytes)
	I0904 21:09:32.694109  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem (1708 bytes)
	I0904 21:09:32.694680  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 21:09:32.715204  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 21:09:32.736001  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 21:09:32.756149  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 21:09:32.776848  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0904 21:09:32.797461  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0904 21:09:32.817893  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 21:09:32.838636  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 21:09:32.859190  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 21:09:32.878986  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360.pem --> /usr/share/ca-certificates/388360.pem (1338 bytes)
	I0904 21:09:32.899262  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem --> /usr/share/ca-certificates/3883602.pem (1708 bytes)
	I0904 21:09:32.919308  427426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 21:09:32.934457  427426 ssh_runner.go:195] Run: openssl version
	I0904 21:09:32.939071  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 21:09:32.947327  427426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:09:32.950337  427426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:56 /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:09:32.950375  427426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:09:32.956215  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 21:09:32.963763  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388360.pem && ln -fs /usr/share/ca-certificates/388360.pem /etc/ssl/certs/388360.pem"
	I0904 21:09:32.971823  427426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388360.pem
	I0904 21:09:32.974813  427426 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 21:07 /usr/share/ca-certificates/388360.pem
	I0904 21:09:32.974849  427426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388360.pem
	I0904 21:09:32.981029  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/388360.pem /etc/ssl/certs/51391683.0"
	I0904 21:09:32.988531  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3883602.pem && ln -fs /usr/share/ca-certificates/3883602.pem /etc/ssl/certs/3883602.pem"
	I0904 21:09:32.996469  427426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3883602.pem
	I0904 21:09:32.999387  427426 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 21:07 /usr/share/ca-certificates/3883602.pem
	I0904 21:09:32.999417  427426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3883602.pem
	I0904 21:09:33.005494  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3883602.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 21:09:33.013035  427426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 21:09:33.015978  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 21:09:33.021883  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 21:09:33.027556  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 21:09:33.033125  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 21:09:33.038967  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 21:09:33.044497  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0904 21:09:33.050050  427426 kubeadm.go:392] StartCluster: {Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:09:33.050142  427426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 21:09:33.050172  427426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 21:09:33.083435  427426 cri.go:89] found id: "9aae164b6b622e57cfa63c0d10780b1d415a846e61eef71f81fb632601cbf077"
	I0904 21:09:33.083450  427426 cri.go:89] found id: "4e0bb8dd41e21f2833e9485ae206c36d37cd22fc7e45e93b5edac3a2971b5f0d"
	I0904 21:09:33.083454  427426 cri.go:89] found id: "d1bf470c950376569202b921adaecbf97801a2ebfe9fffcfc07500259775d103"
	I0904 21:09:33.083457  427426 cri.go:89] found id: "0fd149fb30af74d582046fc60a9f80ce2cd48cec39f94992069066c32e3c7cb2"
	I0904 21:09:33.083460  427426 cri.go:89] found id: "147b1660e0ddf97548b185775163fa70312e80a72b723002562b6c48722dc082"
	I0904 21:09:33.083464  427426 cri.go:89] found id: "64aaae8d657daf42d90854673aa7ce9152f3f2314bae53b22f99da336581d403"
	I0904 21:09:33.083467  427426 cri.go:89] found id: "f74416e4a5adc00b9607edad20bae2daa3f0875f93376105c1a35c9e998b2392"
	I0904 21:09:33.083469  427426 cri.go:89] found id: "753221e723ff279278be4081dd108fed0fe299d51e26bf89fcbd7b19210b8ee2"
	I0904 21:09:33.083472  427426 cri.go:89] found id: "279dbefad36138dca3e8b3083d9738f2befb8c0fdb16fc5221ce1d1045032b84"
	I0904 21:09:33.083480  427426 cri.go:89] found id: "f9d74c2b858ae49842e0adccba71f5370114400fb309630986703d1ce2392fd9"
	I0904 21:09:33.083483  427426 cri.go:89] found id: "bae7f7748e3ed3e15e34412a70e8b212313727408eca9300328da268b231e549"
	I0904 21:09:33.083486  427426 cri.go:89] found id: "92502b10b160b6d76312e20c0abc179495a571aaec51a12d27cda5d819bb9059"
	I0904 21:09:33.083504  427426 cri.go:89] found id: "e9297e068100c821ab8006290e2bc83bc7ba46656546d2bf1370e0f93bfe6b1d"
	I0904 21:09:33.083507  427426 cri.go:89] found id: "077927d2e257a157021d4329d81ac341e9429f2909c4002d391942aede49e58a"
	I0904 21:09:33.083509  427426 cri.go:89] found id: "c62a7ac4db3db9efd667d8c912812c5351ff0008c244007fff704b49c2252a3c"
	I0904 21:09:33.083513  427426 cri.go:89] found id: "c81776aca109af6344df8f2c7479e7584e5a25c5c36f58ee94cffee56b350099"
	I0904 21:09:33.083515  427426 cri.go:89] found id: ""
	I0904 21:09:33.083560  427426 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
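The certificate block in the log above shows minikube copying its CA material onto the node and registering it via OpenSSL's hashed-symlink convention: "openssl x509 -hash -noout" prints the subject hash that becomes the symlink name under /etc/ssl/certs (b5213941.0, 51391683.0 and 3ec20f2e.0 here), and each cluster certificate is then checked with "-checkend 86400", i.e. it must not expire within the next 24 hours. A minimal sketch of the same two checks run by hand, using paths taken from the log and assuming shell access to the node (for example via "minikube ssh -p functional-434682"):

    # Print the subject hash that names the /etc/ssl/certs/<hash>.0 symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # Exit non-zero if the apiserver certificate expires within 86400 seconds (24h)
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "apiserver.crt still valid for at least 24h"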
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-434682 -n functional-434682
helpers_test.go:269: (dbg) Run:  kubectl --context functional-434682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-434682 describe pod hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-434682 describe pod hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-8n82x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:15:40 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6kdmp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6kdmp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  31s   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8n82x to functional-434682
	  Normal  Pulling    30s   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-84wxx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zbxg8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zbxg8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  6m3s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-84wxx to functional-434682
	  Warning  Failed     73s (x3 over 5m2s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     73s (x3 over 5m2s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    37s (x5 over 5m2s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     37s (x5 over 5m2s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    25s (x4 over 6m3s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-wzh2r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:02 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hc4jr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hc4jr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m9s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-wzh2r to functional-434682
	  Warning  Failed     5m33s                kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m47s                kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    96s (x5 over 5m32s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     96s (x5 over 5m32s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    81s (x4 over 6m9s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     12s (x4 over 5m33s)  kubelet            Error: ErrImagePull
	  Warning  Failed     12s (x2 over 2m15s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:04 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zhxrs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zhxrs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m7s                  default-scheduler  Successfully assigned default/nginx-svc to functional-434682
	  Warning  Failed     3m16s (x2 over 5m2s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     73s (x3 over 5m2s)    kubelet            Error: ErrImagePull
	  Warning  Failed     73s                   kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    48s (x4 over 5m2s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     48s (x4 over 5m2s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    33s (x4 over 6m7s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:08 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bx7br (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bx7br:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-434682
	  Warning  Failed     43s (x3 over 4m32s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     43s (x3 over 4m32s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    13s (x4 over 4m31s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     13s (x4 over 4m31s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    0s (x4 over 6m2s)    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (367.80s)
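Every non-running pod in the post-mortem above is stuck on an image pull rather than on the persistent volume claim itself: docker.io/nginx, docker.io/nginx:alpine and docker.io/mysql:5.7 hit Docker Hub's unauthenticated pull rate limit (toomanyrequests), while kicbase/echo-server fails CRI-O short-name resolution because no unqualified-search registries are defined in /etc/containers/registries.conf. One way to take the in-cluster pulls out of the equation is to stage the images from the host; a minimal sketch, assuming the host's Docker daemon can pull them (pulls made after "docker login" are not counted against the anonymous limit) and using the profile name from this report:

    # Pull on the host, where an authenticated pull avoids the anonymous rate limit
    docker pull docker.io/nginx:alpine
    docker pull docker.io/mysql:5.7
    # Copy the images into the cluster's container runtime so CRI-O finds them locally
    minikube image load docker.io/nginx:alpine -p functional-434682
    minikube image load docker.io/mysql:5.7 -p functional-434682
    # kicbase/echo-server is a different failure mode: it needs a fully qualified image
    # reference (or a registries.conf short-name alias) so CRI-O can resolve it at all.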

                                                
                                    
TestFunctional/parallel/MySQL (602.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-434682 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-wzh2r" [a63863a4-7fbe-4d12-b15a-fcfb930c1a96] Pending
helpers_test.go:352: "mysql-5bb876957f-wzh2r" [a63863a4-7fbe-4d12-b15a-fcfb930c1a96] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-434682 -n functional-434682
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-04 21:20:02.297173074 +0000 UTC m=+1479.140775628
functional_test.go:1804: (dbg) Run:  kubectl --context functional-434682 describe po mysql-5bb876957f-wzh2r -n default
functional_test.go:1804: (dbg) kubectl --context functional-434682 describe po mysql-5bb876957f-wzh2r -n default:
Name:             mysql-5bb876957f-wzh2r
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-434682/192.168.49.2
Start Time:       Thu, 04 Sep 2025 21:10:02 +0000
Labels:           app=mysql
pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hc4jr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-hc4jr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-wzh2r to functional-434682
Warning  Failed     9m24s                 kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m38s                 kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m32s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     119s (x5 over 9m24s)  kubelet            Error: ErrImagePull
Warning  Failed     119s (x3 over 6m6s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     51s (x16 over 9m23s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    11s (x19 over 9m23s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-434682 logs mysql-5bb876957f-wzh2r -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-434682 logs mysql-5bb876957f-wzh2r -n default: exit status 1 (66.164617ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-wzh2r" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-434682 logs mysql-5bb876957f-wzh2r -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-434682
helpers_test.go:243: (dbg) docker inspect functional-434682:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e",
	        "Created": "2025-09-04T21:07:38.362965102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 421064,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T21:07:38.3914292Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:26724cb1013292a869503ea8c35fa5a932e6ec3e4e85e318411373ae6a24478b",
	        "ResolvConfPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/hosts",
	        "LogPath": "/var/lib/docker/containers/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e/c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e-json.log",
	        "Name": "/functional-434682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-434682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-434682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c103d70542804cdfe6ca0955fdeb9f98426d3f47b6777015f0703c39be550e2e",
	                "LowerDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686-init/diff:/var/lib/docker/overlay2/fcc29a201369e5fb579bfafdafe90606e5b86bd2f4b52beb7f6a97f7e955214d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb0bbf1c5f29be9a012439922fbd83d9a4ab23f661af8d9d69d65583c4b8f686/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-434682",
	                "Source": "/var/lib/docker/volumes/functional-434682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-434682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-434682",
	                "name.minikube.sigs.k8s.io": "functional-434682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a1ee1974cafddfa91d00d9aacf8ecbbf723cb04b47fdc840a7a8d178cf57558",
	            "SandboxKey": "/var/run/docker/netns/5a1ee1974caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33157"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-434682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:eb:a6:0f:f0:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e2d7bc8acf0e9f0624cc76f4cbe69fbd7f4637588b37e979a792472035792fd9",
	                    "EndpointID": "9900ed97e30e5355fe09aab9ccab90615da0bf4734544f78893d6f734e005f15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-434682",
	                        "c103d7054280"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
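The inspect output above shows the kicbase container for this profile publishing all of its ports on 127.0.0.1 only, with the Kubernetes apiserver port (8441/tcp inside the container) mapped to host port 33158. A quick way to confirm that mapping from the host, assuming the docker CLI and curl are available (the apiserver answers on the published port, though unauthenticated requests are typically rejected with 401/403):

    # Ask Docker which host port backs the apiserver port of the profile container
    docker port functional-434682 8441/tcp
    # Expect an HTTP response from the apiserver on the published port
    curl -sk https://127.0.0.1:33158/version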
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-434682 -n functional-434682
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-434682 logs -n 25: (1.358773815s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons    │ functional-434682 addons list                                                                                                     │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ addons    │ functional-434682 addons list -o json                                                                                             │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:10 UTC │ 04 Sep 25 21:10 UTC │
	│ mount     │ -p functional-434682 /tmp/TestFunctionalparallelMountCmdany-port1577728095/001:/mount-9p --alsologtostderr -v=1                   │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:16 UTC │                     │
	│ ssh       │ functional-434682 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:16 UTC │                     │
	│ ssh       │ functional-434682 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:16 UTC │ 04 Sep 25 21:16 UTC │
	│ ssh       │ functional-434682 ssh -- ls -la /mount-9p                                                                                         │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:16 UTC │ 04 Sep 25 21:16 UTC │
	│ ssh       │ functional-434682 ssh cat /mount-9p/test-1757020572394914230                                                                      │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:16 UTC │ 04 Sep 25 21:16 UTC │
	│ ssh       │ functional-434682 ssh stat /mount-9p/created-by-test                                                                              │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh       │ functional-434682 ssh stat /mount-9p/created-by-pod                                                                               │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh       │ functional-434682 ssh sudo umount -f /mount-9p                                                                                    │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh       │ functional-434682 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ mount     │ -p functional-434682 /tmp/TestFunctionalparallelMountCmdspecific-port2055463540/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ ssh       │ functional-434682 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh       │ functional-434682 ssh -- ls -la /mount-9p                                                                                         │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh       │ functional-434682 ssh sudo umount -f /mount-9p                                                                                    │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ mount     │ -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount1 --alsologtostderr -v=1                 │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ ssh       │ functional-434682 ssh findmnt -T /mount1                                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ mount     │ -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount2 --alsologtostderr -v=1                 │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ mount     │ -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount3 --alsologtostderr -v=1                 │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ ssh       │ functional-434682 ssh findmnt -T /mount1                                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh       │ functional-434682 ssh findmnt -T /mount2                                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ ssh       │ functional-434682 ssh findmnt -T /mount3                                                                                          │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ mount     │ -p functional-434682 --kill=true                                                                                                  │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	│ license   │                                                                                                                                   │ minikube          │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │ 04 Sep 25 21:17 UTC │
	│ dashboard │ --url --port 36195 -p functional-434682 --alsologtostderr -v=1                                                                    │ functional-434682 │ jenkins │ v1.36.0 │ 04 Sep 25 21:17 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 21:09:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 21:09:16.264166  427426 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:09:16.264410  427426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:09:16.264414  427426 out.go:374] Setting ErrFile to fd 2...
	I0904 21:09:16.264417  427426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:09:16.264591  427426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:09:16.265152  427426 out.go:368] Setting JSON to false
	I0904 21:09:16.266101  427426 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10305,"bootTime":1757009851,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:09:16.266189  427426 start.go:140] virtualization: kvm guest
	I0904 21:09:16.268100  427426 out.go:179] * [functional-434682] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 21:09:16.269361  427426 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:09:16.269415  427426 notify.go:220] Checking for updates...
	I0904 21:09:16.271526  427426 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:09:16.272749  427426 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 21:09:16.273866  427426 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 21:09:16.275096  427426 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:09:16.276178  427426 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:09:16.277599  427426 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:09:16.277680  427426 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:09:16.299288  427426 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 21:09:16.299352  427426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:09:16.344853  427426 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:66 SystemTime:2025-09-04 21:09:16.336206612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:09:16.344945  427426 docker.go:318] overlay module found
	I0904 21:09:16.346587  427426 out.go:179] * Using the docker driver based on existing profile
	I0904 21:09:16.347703  427426 start.go:304] selected driver: docker
	I0904 21:09:16.347710  427426 start.go:918] validating driver "docker" against &{Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:09:16.347772  427426 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:09:16.347842  427426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:09:16.392249  427426 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:66 SystemTime:2025-09-04 21:09:16.383381297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:09:16.393082  427426 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 21:09:16.393116  427426 cni.go:84] Creating CNI manager for ""
	I0904 21:09:16.393180  427426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 21:09:16.393247  427426 start.go:348] cluster config:
	{Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:09:16.394926  427426 out.go:179] * Starting "functional-434682" primary control-plane node in "functional-434682" cluster
	I0904 21:09:16.395999  427426 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 21:09:16.397013  427426 out.go:179] * Pulling base image v0.0.47-1756116447-21413 ...
	I0904 21:09:16.398022  427426 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 21:09:16.398051  427426 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 21:09:16.398057  427426 cache.go:58] Caching tarball of preloaded images
	I0904 21:09:16.398127  427426 preload.go:172] Found /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 21:09:16.398133  427426 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 21:09:16.398142  427426 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0904 21:09:16.398208  427426 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/config.json ...
	I0904 21:09:16.417640  427426 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon, skipping pull
	I0904 21:09:16.417650  427426 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 exists in daemon, skipping load
	I0904 21:09:16.417664  427426 cache.go:232] Successfully downloaded all kic artifacts
	I0904 21:09:16.417698  427426 start.go:360] acquireMachinesLock for functional-434682: {Name:mke450e0eb1aabfc0780d2d8a3576f25b1b623c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 21:09:16.417752  427426 start.go:364] duration metric: took 40.779µs to acquireMachinesLock for "functional-434682"
	I0904 21:09:16.417766  427426 start.go:96] Skipping create...Using existing machine configuration
	I0904 21:09:16.417770  427426 fix.go:54] fixHost starting: 
	I0904 21:09:16.417979  427426 cli_runner.go:164] Run: docker container inspect functional-434682 --format={{.State.Status}}
	I0904 21:09:16.434349  427426 fix.go:112] recreateIfNeeded on functional-434682: state=Running err=<nil>
	W0904 21:09:16.434370  427426 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 21:09:16.435964  427426 out.go:252] * Updating the running docker "functional-434682" container ...
	I0904 21:09:16.435981  427426 machine.go:93] provisionDockerMachine start ...
	I0904 21:09:16.436049  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:16.452168  427426 main.go:141] libmachine: Using SSH client type: native
	I0904 21:09:16.452402  427426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33155 <nil> <nil>}
	I0904 21:09:16.452408  427426 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 21:09:16.568159  427426 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-434682
	
	I0904 21:09:16.568180  427426 ubuntu.go:182] provisioning hostname "functional-434682"
	I0904 21:09:16.568234  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:16.584895  427426 main.go:141] libmachine: Using SSH client type: native
	I0904 21:09:16.585116  427426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33155 <nil> <nil>}
	I0904 21:09:16.585123  427426 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-434682 && echo "functional-434682" | sudo tee /etc/hostname
	I0904 21:09:16.710891  427426 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-434682
	
	I0904 21:09:16.710969  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:16.727909  427426 main.go:141] libmachine: Using SSH client type: native
	I0904 21:09:16.728095  427426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33155 <nil> <nil>}
	I0904 21:09:16.728106  427426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-434682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-434682/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-434682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 21:09:16.836637  427426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 21:09:16.836656  427426 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21490-384635/.minikube CaCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21490-384635/.minikube}
	I0904 21:09:16.836695  427426 ubuntu.go:190] setting up certificates
	I0904 21:09:16.836707  427426 provision.go:84] configureAuth start
	I0904 21:09:16.836777  427426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-434682
	I0904 21:09:16.853624  427426 provision.go:143] copyHostCerts
	I0904 21:09:16.853687  427426 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem, removing ...
	I0904 21:09:16.853699  427426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem
	I0904 21:09:16.853764  427426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem (1078 bytes)
	I0904 21:09:16.853872  427426 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem, removing ...
	I0904 21:09:16.853877  427426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem
	I0904 21:09:16.853904  427426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem (1123 bytes)
	I0904 21:09:16.853987  427426 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem, removing ...
	I0904 21:09:16.853991  427426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem
	I0904 21:09:16.854020  427426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem (1675 bytes)
	I0904 21:09:16.854092  427426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem org=jenkins.functional-434682 san=[127.0.0.1 192.168.49.2 functional-434682 localhost minikube]
	I0904 21:09:16.895000  427426 provision.go:177] copyRemoteCerts
	I0904 21:09:16.895052  427426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 21:09:16.895089  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:16.912206  427426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
	I0904 21:09:17.001338  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 21:09:17.023019  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0904 21:09:17.043859  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 21:09:17.064718  427426 provision.go:87] duration metric: took 228.000203ms to configureAuth
	I0904 21:09:17.064737  427426 ubuntu.go:206] setting minikube options for container-runtime
	I0904 21:09:17.064934  427426 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:09:17.065025  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:17.081946  427426 main.go:141] libmachine: Using SSH client type: native
	I0904 21:09:17.082142  427426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33155 <nil> <nil>}
	I0904 21:09:17.082153  427426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 21:09:22.407020  427426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 21:09:22.407041  427426 machine.go:96] duration metric: took 5.971053302s to provisionDockerMachine
	I0904 21:09:22.407053  427426 start.go:293] postStartSetup for "functional-434682" (driver="docker")
	I0904 21:09:22.407066  427426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 21:09:22.407139  427426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 21:09:22.407171  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:22.424285  427426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
	I0904 21:09:22.508935  427426 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 21:09:22.511905  427426 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 21:09:22.511922  427426 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 21:09:22.511928  427426 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 21:09:22.511934  427426 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 21:09:22.511943  427426 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/addons for local assets ...
	I0904 21:09:22.511989  427426 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/files for local assets ...
	I0904 21:09:22.512052  427426 filesync.go:149] local asset: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem -> 3883602.pem in /etc/ssl/certs
	I0904 21:09:22.512117  427426 filesync.go:149] local asset: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/test/nested/copy/388360/hosts -> hosts in /etc/test/nested/copy/388360
	I0904 21:09:22.512146  427426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/388360
	I0904 21:09:22.519691  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem --> /etc/ssl/certs/3883602.pem (1708 bytes)
	I0904 21:09:22.540343  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/test/nested/copy/388360/hosts --> /etc/test/nested/copy/388360/hosts (40 bytes)
	I0904 21:09:22.560706  427426 start.go:296] duration metric: took 153.640509ms for postStartSetup
	I0904 21:09:22.560791  427426 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:09:22.560841  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:22.577647  427426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
	I0904 21:09:22.657377  427426 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 21:09:22.661446  427426 fix.go:56] duration metric: took 6.243670156s for fixHost
	I0904 21:09:22.661459  427426 start.go:83] releasing machines lock for "functional-434682", held for 6.243702066s
	I0904 21:09:22.661525  427426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-434682
	I0904 21:09:22.677730  427426 ssh_runner.go:195] Run: cat /version.json
	I0904 21:09:22.677761  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:22.677842  427426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 21:09:22.677889  427426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
	I0904 21:09:22.694751  427426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
	I0904 21:09:22.695979  427426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
	I0904 21:09:22.867530  427426 ssh_runner.go:195] Run: systemctl --version
	I0904 21:09:22.871629  427426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 21:09:23.008996  427426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 21:09:23.013322  427426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:09:23.021420  427426 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 21:09:23.021476  427426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:09:23.029090  427426 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 21:09:23.029101  427426 start.go:495] detecting cgroup driver to use...
	I0904 21:09:23.029130  427426 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 21:09:23.029172  427426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 21:09:23.040230  427426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 21:09:23.049958  427426 docker.go:218] disabling cri-docker service (if available) ...
	I0904 21:09:23.049991  427426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 21:09:23.061012  427426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 21:09:23.070630  427426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 21:09:23.187701  427426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 21:09:23.286605  427426 docker.go:234] disabling docker service ...
	I0904 21:09:23.286659  427426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 21:09:23.297766  427426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 21:09:23.307508  427426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 21:09:23.409961  427426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 21:09:23.516023  427426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 21:09:23.526283  427426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 21:09:23.541041  427426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 21:09:23.541089  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.549725  427426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 21:09:23.549764  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.558190  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.566383  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.574417  427426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 21:09:23.581962  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.590504  427426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.598316  427426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:09:23.606521  427426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 21:09:23.613565  427426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 21:09:23.620646  427426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:09:23.734347  427426 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 21:09:32.270975  427426 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.536597956s)
	I0904 21:09:32.270998  427426 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 21:09:32.271050  427426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 21:09:32.274588  427426 start.go:563] Will wait 60s for crictl version
	I0904 21:09:32.274633  427426 ssh_runner.go:195] Run: which crictl
	I0904 21:09:32.277818  427426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 21:09:32.310247  427426 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 21:09:32.310321  427426 ssh_runner.go:195] Run: crio --version
	I0904 21:09:32.343784  427426 ssh_runner.go:195] Run: crio --version
	I0904 21:09:32.377109  427426 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 21:09:32.378261  427426 cli_runner.go:164] Run: docker network inspect functional-434682 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 21:09:32.394323  427426 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 21:09:32.399131  427426 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0904 21:09:32.400164  427426 kubeadm.go:875] updating cluster {Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 21:09:32.400267  427426 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 21:09:32.400313  427426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 21:09:32.438885  427426 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 21:09:32.438896  427426 crio.go:433] Images already preloaded, skipping extraction
	I0904 21:09:32.438943  427426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 21:09:32.471724  427426 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 21:09:32.471782  427426 cache_images.go:85] Images are preloaded, skipping loading
	I0904 21:09:32.471802  427426 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0904 21:09:32.471912  427426 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-434682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 21:09:32.471981  427426 ssh_runner.go:195] Run: crio config
	I0904 21:09:32.514729  427426 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0904 21:09:32.514749  427426 cni.go:84] Creating CNI manager for ""
	I0904 21:09:32.514759  427426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 21:09:32.514767  427426 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 21:09:32.514792  427426 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-434682 NodeName:functional-434682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 21:09:32.514897  427426 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-434682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
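
The ExtraOptions entry for the apiserver in the cluster config above ({Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}) is what becomes the apiServer extraArgs block in this generated kubeadm config. As a minimal sketch (not taken from this run's recorded command line), such an option is normally supplied to minikube through the --extra-config flag, e.g.:

    minikube start -p functional-434682 --driver=docker --container-runtime=crio \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision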
	
	I0904 21:09:32.514949  427426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 21:09:32.523132  427426 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 21:09:32.523181  427426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 21:09:32.530952  427426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0904 21:09:32.546824  427426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 21:09:32.562300  427426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I0904 21:09:32.577636  427426 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0904 21:09:32.580742  427426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:09:32.683105  427426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 21:09:32.693507  427426 certs.go:68] Setting up /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682 for IP: 192.168.49.2
	I0904 21:09:32.693530  427426 certs.go:194] generating shared ca certs ...
	I0904 21:09:32.693552  427426 certs.go:226] acquiring lock for ca certs: {Name:mkab35eaf89c739a644dd45428dbbd1b30c489d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:09:32.693719  427426 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key
	I0904 21:09:32.693750  427426 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key
	I0904 21:09:32.693756  427426 certs.go:256] generating profile certs ...
	I0904 21:09:32.693828  427426 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.key
	I0904 21:09:32.693868  427426 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/apiserver.key.1937a050
	I0904 21:09:32.693899  427426 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/proxy-client.key
	I0904 21:09:32.693997  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360.pem (1338 bytes)
	W0904 21:09:32.694020  427426 certs.go:480] ignoring /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360_empty.pem, impossibly tiny 0 bytes
	I0904 21:09:32.694025  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem (1679 bytes)
	I0904 21:09:32.694044  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem (1078 bytes)
	I0904 21:09:32.694062  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem (1123 bytes)
	I0904 21:09:32.694078  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem (1675 bytes)
	I0904 21:09:32.694109  427426 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem (1708 bytes)
	I0904 21:09:32.694680  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 21:09:32.715204  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 21:09:32.736001  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 21:09:32.756149  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 21:09:32.776848  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0904 21:09:32.797461  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0904 21:09:32.817893  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 21:09:32.838636  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 21:09:32.859190  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 21:09:32.878986  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360.pem --> /usr/share/ca-certificates/388360.pem (1338 bytes)
	I0904 21:09:32.899262  427426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem --> /usr/share/ca-certificates/3883602.pem (1708 bytes)
	I0904 21:09:32.919308  427426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 21:09:32.934457  427426 ssh_runner.go:195] Run: openssl version
	I0904 21:09:32.939071  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 21:09:32.947327  427426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:09:32.950337  427426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:56 /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:09:32.950375  427426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:09:32.956215  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 21:09:32.963763  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388360.pem && ln -fs /usr/share/ca-certificates/388360.pem /etc/ssl/certs/388360.pem"
	I0904 21:09:32.971823  427426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388360.pem
	I0904 21:09:32.974813  427426 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 21:07 /usr/share/ca-certificates/388360.pem
	I0904 21:09:32.974849  427426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388360.pem
	I0904 21:09:32.981029  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/388360.pem /etc/ssl/certs/51391683.0"
	I0904 21:09:32.988531  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3883602.pem && ln -fs /usr/share/ca-certificates/3883602.pem /etc/ssl/certs/3883602.pem"
	I0904 21:09:32.996469  427426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3883602.pem
	I0904 21:09:32.999387  427426 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 21:07 /usr/share/ca-certificates/3883602.pem
	I0904 21:09:32.999417  427426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3883602.pem
	I0904 21:09:33.005494  427426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3883602.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 21:09:33.013035  427426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 21:09:33.015978  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 21:09:33.021883  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 21:09:33.027556  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 21:09:33.033125  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 21:09:33.038967  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 21:09:33.044497  427426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
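The openssl runs above use -checkend 86400 to confirm each control-plane certificate stays valid for at least another 24 hours (exit status 0 means the certificate will not expire within that window). A minimal sketch of the same check run by hand inside the node (path taken from this log; the echo wrapper is illustrative only):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo 'valid for >24h' || echo 'expires within 24h'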
	I0904 21:09:33.050050  427426 kubeadm.go:392] StartCluster: {Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:09:33.050142  427426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 21:09:33.050172  427426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 21:09:33.083435  427426 cri.go:89] found id: "9aae164b6b622e57cfa63c0d10780b1d415a846e61eef71f81fb632601cbf077"
	I0904 21:09:33.083450  427426 cri.go:89] found id: "4e0bb8dd41e21f2833e9485ae206c36d37cd22fc7e45e93b5edac3a2971b5f0d"
	I0904 21:09:33.083454  427426 cri.go:89] found id: "d1bf470c950376569202b921adaecbf97801a2ebfe9fffcfc07500259775d103"
	I0904 21:09:33.083457  427426 cri.go:89] found id: "0fd149fb30af74d582046fc60a9f80ce2cd48cec39f94992069066c32e3c7cb2"
	I0904 21:09:33.083460  427426 cri.go:89] found id: "147b1660e0ddf97548b185775163fa70312e80a72b723002562b6c48722dc082"
	I0904 21:09:33.083464  427426 cri.go:89] found id: "64aaae8d657daf42d90854673aa7ce9152f3f2314bae53b22f99da336581d403"
	I0904 21:09:33.083467  427426 cri.go:89] found id: "f74416e4a5adc00b9607edad20bae2daa3f0875f93376105c1a35c9e998b2392"
	I0904 21:09:33.083469  427426 cri.go:89] found id: "753221e723ff279278be4081dd108fed0fe299d51e26bf89fcbd7b19210b8ee2"
	I0904 21:09:33.083472  427426 cri.go:89] found id: "279dbefad36138dca3e8b3083d9738f2befb8c0fdb16fc5221ce1d1045032b84"
	I0904 21:09:33.083480  427426 cri.go:89] found id: "f9d74c2b858ae49842e0adccba71f5370114400fb309630986703d1ce2392fd9"
	I0904 21:09:33.083483  427426 cri.go:89] found id: "bae7f7748e3ed3e15e34412a70e8b212313727408eca9300328da268b231e549"
	I0904 21:09:33.083486  427426 cri.go:89] found id: "92502b10b160b6d76312e20c0abc179495a571aaec51a12d27cda5d819bb9059"
	I0904 21:09:33.083504  427426 cri.go:89] found id: "e9297e068100c821ab8006290e2bc83bc7ba46656546d2bf1370e0f93bfe6b1d"
	I0904 21:09:33.083507  427426 cri.go:89] found id: "077927d2e257a157021d4329d81ac341e9429f2909c4002d391942aede49e58a"
	I0904 21:09:33.083509  427426 cri.go:89] found id: "c62a7ac4db3db9efd667d8c912812c5351ff0008c244007fff704b49c2252a3c"
	I0904 21:09:33.083513  427426 cri.go:89] found id: "c81776aca109af6344df8f2c7479e7584e5a25c5c36f58ee94cffee56b350099"
	I0904 21:09:33.083515  427426 cri.go:89] found id: ""
	I0904 21:09:33.083560  427426 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-434682 -n functional-434682
helpers_test.go:269: (dbg) Run:  kubectl --context functional-434682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-434682 describe pod busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-434682 describe pod busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb: exit status 1 (96.336362ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:16:14 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7573ae495fc6e3c79942ee196188587a2b74c897d904477d1231fc3ca6208b33
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 04 Sep 2025 21:17:32 +0000
	      Finished:     Thu, 04 Sep 2025 21:17:32 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h9rcr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-h9rcr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m50s  default-scheduler  Successfully assigned default/busybox-mount to functional-434682
	  Normal  Pulling    3m50s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m32s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.225s (1m18.121s including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m32s  kubelet            Created container: mount-munger
	  Normal  Started    2m32s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-8n82x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:15:40 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6kdmp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6kdmp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m24s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8n82x to functional-434682
	  Warning  Failed     60s (x3 over 3m34s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     60s (x3 over 3m34s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    20s (x5 over 3m34s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     20s (x5 over 3m34s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7s (x4 over 4m23s)   kubelet            Pulling image "kicbase/echo-server"
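The repeated ErrImagePull above comes from CRI-O's short-name resolution: "kicbase/echo-server" is an unqualified image reference and the node defines no unqualified-search registries, so the pull is rejected before it reaches any registry. As a hedged sketch (not part of this test run; the tag and the registries.conf change are assumptions), either a fully qualified image name or an unqualified-search entry on the node avoids the error:

    # use a fully qualified reference for the echo-server container
    kubectl --context functional-434682 set image deployment/hello-node \
        echo-server=docker.io/kicbase/echo-server:latest

    # or, on the node, in /etc/containers/registries.conf:
    unqualified-search-registries = ["docker.io"]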
	
	
	Name:             hello-node-connect-7d85dfc575-84wxx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zbxg8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zbxg8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  9m56s                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-84wxx to functional-434682
	  Normal   Pulling    2m7s (x5 over 9m56s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     60s (x5 over 8m55s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     60s (x5 over 8m55s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x15 over 8m55s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     9s (x15 over 8m55s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-wzh2r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:02 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hc4jr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hc4jr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-wzh2r to functional-434682
	  Warning  Failed     9m26s                 kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m40s                 kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m34s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m1s (x5 over 9m26s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m1s (x3 over 6m8s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     53s (x16 over 9m25s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x20 over 9m25s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:04 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zhxrs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zhxrs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/nginx-svc to functional-434682
	  Warning  Failed     5m6s                    kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m34s (x3 over 8m55s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m34s (x4 over 8m55s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    2m19s (x10 over 8m55s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m19s (x10 over 8m55s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m6s (x5 over 10m)      kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-434682/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 21:10:08 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bx7br (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bx7br:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m56s                  default-scheduler  Successfully assigned default/sp-pod to functional-434682
	  Warning  Failed     4m36s (x3 over 8m25s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m33s (x4 over 8m25s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m33s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    77s (x10 over 8m24s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     77s (x10 over 8m24s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    62s (x5 over 9m55s)    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-xj66s" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5nqcb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-434682 describe pod busybox-mount hello-node-75c85bcc94-8n82x hello-node-connect-7d85dfc575-84wxx mysql-5bb876957f-wzh2r nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-xj66s kubernetes-dashboard-855c9754f9-5nqcb: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.80s)
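Every container in the describe output above is stuck in ImagePullBackOff, so the MySQL failure is an image-pull problem (Docker Hub's anonymous pull rate limit), not a database, storage, or scheduling issue. A minimal reproduction-side workaround, assuming the same profile name functional-434682 and a host Docker daemon that can still reach docker.io, is to pull once on the host and side-load the image so the kubelet never goes back to the registry:

    # pull once on the host; this consumes the host's rate-limit budget, not the node's
    docker pull docker.io/mysql:5.7
    # copy the image into the minikube node's CRI-O image store
    minikube -p functional-434682 image load docker.io/mysql:5.7
    # provided the pod's imagePullPolicy is IfNotPresent (the default for a non-latest tag),
    # the pending pod starts on the kubelet's next retry
    kubectl --context functional-434682 get pod mysql-5bb876957f-wzh2r -w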

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-434682 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [1ec180eb-eb78-40a3-aab9-f321efb0233d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-434682 -n functional-434682
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-04 21:14:04.400231824 +0000 UTC m=+1121.243834394
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-434682 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-434682 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-434682/192.168.49.2
Start Time:       Thu, 04 Sep 2025 21:10:04 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:  10.244.0.5
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zhxrs (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-zhxrs:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-434682
Warning  Failed     69s (x2 over 2m55s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     69s (x2 over 2m55s)  kubelet            Error: ErrImagePull
Normal   BackOff    57s (x2 over 2m55s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     57s (x2 over 2m55s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    44s (x3 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-434682 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-434682 logs nginx-svc -n default: exit status 1 (65.121109ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-434682 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.60s)
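The nginx-svc pull fails for the same reason: toomanyrequests from docker.io. Docker documents a probe for inspecting how much anonymous pull budget an IP has left; a hedged sketch, assuming curl and jq are available on the Jenkins host:

    # fetch an anonymous token for Docker's rate-limit preview repository,
    # then read the rate-limit headers off a manifest request
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" \
      "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit
    # ratelimit-limit / ratelimit-remaining show the current anonymous budget for this source IP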

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (95.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0904 21:14:04.525996  388360 retry.go:31] will retry after 4.0945899s: Temporary Error: Get "http:": http: no Host in request URL
I0904 21:14:08.621556  388360 retry.go:31] will retry after 3.585734814s: Temporary Error: Get "http:": http: no Host in request URL
I0904 21:14:12.207508  388360 retry.go:31] will retry after 3.695734152s: Temporary Error: Get "http:": http: no Host in request URL
I0904 21:14:15.904015  388360 retry.go:31] will retry after 7.454111583s: Temporary Error: Get "http:": http: no Host in request URL
I0904 21:14:23.358545  388360 retry.go:31] will retry after 7.613699813s: Temporary Error: Get "http:": http: no Host in request URL
I0904 21:14:30.972403  388360 retry.go:31] will retry after 29.30886655s: Temporary Error: Get "http:": http: no Host in request URL
I0904 21:15:00.281545  388360 retry.go:31] will retry after 40.180331674s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-434682 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx-svc   LoadBalancer   10.107.112.123   10.107.112.123   80:32186/TCP   5m36s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (95.99s)
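AccessDirect retries against an empty URL ("http://") because the preceding WaitService/Setup step never produced a reachable pod, so the test had no host to substitute; the tunnel itself did hand out an external IP (10.107.112.123 in the svc listing above). The assertion the test makes can be repeated by hand, assuming `minikube tunnel` is still running and the nginx image eventually pulls:

    # resolve the LoadBalancer IP the tunnel assigned and repeat the test's check
    IP=$(kubectl --context functional-434682 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://${IP}/" | grep "Welcome to nginx!"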

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-434682 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-434682 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-8n82x" [795d04f0-d02a-434e-a4ab-297ee10360de] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-434682 -n functional-434682
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-04 21:25:41.038071142 +0000 UTC m=+1817.881673696
functional_test.go:1460: (dbg) Run:  kubectl --context functional-434682 describe po hello-node-75c85bcc94-8n82x -n default
functional_test.go:1460: (dbg) kubectl --context functional-434682 describe po hello-node-75c85bcc94-8n82x -n default:
Name:             hello-node-75c85bcc94-8n82x
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-434682/192.168.49.2
Start Time:       Thu, 04 Sep 2025 21:15:40 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6kdmp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6kdmp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8n82x to functional-434682
Normal   Pulling    2m41s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     107s (x5 over 9m11s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     107s (x5 over 9m11s)  kubelet            Error: ErrImagePull
Warning  Failed     24s (x16 over 9m11s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    11s (x17 over 9m11s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-434682 logs hello-node-75c85bcc94-8n82x -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-434682 logs hello-node-75c85bcc94-8n82x -n default: exit status 1 (66.744244ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-8n82x" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-434682 logs hello-node-75c85bcc94-8n82x -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)
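Unlike the docker.io pulls above, hello-node fails before ever contacting a registry: CRI-O rejects the short name kicbase/echo-server because the node's /etc/containers/registries.conf defines no unqualified-search registries. Two hedged ways around it when reproducing (the 1.0 tag matches the image minikube's own docs use and is an assumption here):

    # option 1: point the existing deployment at a fully qualified image,
    # so no short-name resolution is needed
    kubectl --context functional-434682 set image deployment/hello-node \
      echo-server=docker.io/kicbase/echo-server:1.0
    # option 2: allow short names to fall back to Docker Hub inside the node
    # (append the line below to /etc/containers/registries.conf via
    #  `minikube -p functional-434682 ssh`, then `sudo systemctl restart crio`)
    #   unqualified-search-registries = ["docker.io"]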

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 service --namespace=default --https --url hello-node: exit status 115 (508.20594ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32571
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-434682 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)
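SVC_UNREACHABLE here is a direct consequence of the DeployApp failure: the NodePort (32571) exists, but the service has no ready backend, so `minikube service` refuses to treat the URL as usable. A quick confirmation, assuming the same context:

    # an empty ENDPOINTS column confirms there is no running pod behind the service
    kubectl --context functional-434682 get endpoints hello-node
    kubectl --context functional-434682 get pods -l app=hello-node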

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 service hello-node --url --format={{.IP}}: exit status 115 (512.823357ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-434682 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 service hello-node --url: exit status 115 (503.630767ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32571
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-434682 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32571
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.50s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (931.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: exit status 80 (15m31.507885665s)

                                                
                                                
-- stdout --
	* [calico-364928] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-364928" primary control-plane node in "calico-364928" cluster
	* Pulling base image v0.0.47-1756116447-21413 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 21:58:33.488004  695010 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:58:33.488266  695010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:58:33.488276  695010 out.go:374] Setting ErrFile to fd 2...
	I0904 21:58:33.488280  695010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:58:33.488456  695010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:58:33.489176  695010 out.go:368] Setting JSON to false
	I0904 21:58:33.490450  695010 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13262,"bootTime":1757009851,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:58:33.490540  695010 start.go:140] virtualization: kvm guest
	I0904 21:58:33.493319  695010 out.go:179] * [calico-364928] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 21:58:33.495486  695010 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:58:33.495501  695010 notify.go:220] Checking for updates...
	I0904 21:58:33.497875  695010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:58:33.499041  695010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 21:58:33.500250  695010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 21:58:33.501514  695010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:58:33.502670  695010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:58:33.504271  695010 config.go:182] Loaded profile config "auto-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:58:33.504396  695010 config.go:182] Loaded profile config "default-k8s-diff-port-601847": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:58:33.504494  695010 config.go:182] Loaded profile config "kindnet-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:58:33.504620  695010 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:58:33.530579  695010 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 21:58:33.530665  695010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:58:33.589900  695010 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 21:58:33.578747151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:58:33.590003  695010 docker.go:318] overlay module found
	I0904 21:58:33.591707  695010 out.go:179] * Using the docker driver based on user configuration
	I0904 21:58:33.594375  695010 start.go:304] selected driver: docker
	I0904 21:58:33.594406  695010 start.go:918] validating driver "docker" against <nil>
	I0904 21:58:33.594421  695010 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:58:33.595577  695010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:58:33.649404  695010 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 21:58:33.639871462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:58:33.649597  695010 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 21:58:33.649814  695010 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 21:58:33.651352  695010 out.go:179] * Using Docker driver with root privileges
	I0904 21:58:33.652436  695010 cni.go:84] Creating CNI manager for "calico"
	I0904 21:58:33.652455  695010 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0904 21:58:33.652538  695010 start.go:348] cluster config:
	{Name:calico-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:58:33.653823  695010 out.go:179] * Starting "calico-364928" primary control-plane node in "calico-364928" cluster
	I0904 21:58:33.654982  695010 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 21:58:33.656143  695010 out.go:179] * Pulling base image v0.0.47-1756116447-21413 ...
	I0904 21:58:33.657073  695010 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 21:58:33.657117  695010 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 21:58:33.657129  695010 cache.go:58] Caching tarball of preloaded images
	I0904 21:58:33.657197  695010 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0904 21:58:33.657245  695010 preload.go:172] Found /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 21:58:33.657260  695010 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 21:58:33.657378  695010 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/config.json ...
	I0904 21:58:33.657404  695010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/config.json: {Name:mka9fc982769a647c2131cf83078f9b255de14ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:58:33.677744  695010 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon, skipping pull
	I0904 21:58:33.677762  695010 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 exists in daemon, skipping load
	I0904 21:58:33.677780  695010 cache.go:232] Successfully downloaded all kic artifacts
	I0904 21:58:33.677809  695010 start.go:360] acquireMachinesLock for calico-364928: {Name:mk8faf7e42786c86a9ceec8eb8abda6b059d63ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 21:58:33.677917  695010 start.go:364] duration metric: took 86.383µs to acquireMachinesLock for "calico-364928"
	I0904 21:58:33.677941  695010 start.go:93] Provisioning new machine with config: &{Name:calico-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-364928 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 21:58:33.677999  695010 start.go:125] createHost starting for "" (driver="docker")
	I0904 21:58:33.680478  695010 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0904 21:58:33.680697  695010 start.go:159] libmachine.API.Create for "calico-364928" (driver="docker")
	I0904 21:58:33.680726  695010 client.go:168] LocalClient.Create starting
	I0904 21:58:33.680839  695010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem
	I0904 21:58:33.680880  695010 main.go:141] libmachine: Decoding PEM data...
	I0904 21:58:33.680895  695010 main.go:141] libmachine: Parsing certificate...
	I0904 21:58:33.680961  695010 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem
	I0904 21:58:33.680980  695010 main.go:141] libmachine: Decoding PEM data...
	I0904 21:58:33.680994  695010 main.go:141] libmachine: Parsing certificate...
	I0904 21:58:33.681332  695010 cli_runner.go:164] Run: docker network inspect calico-364928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 21:58:33.699534  695010 cli_runner.go:211] docker network inspect calico-364928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 21:58:33.699657  695010 network_create.go:284] running [docker network inspect calico-364928] to gather additional debugging logs...
	I0904 21:58:33.699683  695010 cli_runner.go:164] Run: docker network inspect calico-364928
	W0904 21:58:33.718904  695010 cli_runner.go:211] docker network inspect calico-364928 returned with exit code 1
	I0904 21:58:33.718949  695010 network_create.go:287] error running [docker network inspect calico-364928]: docker network inspect calico-364928: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-364928 not found
	I0904 21:58:33.718970  695010 network_create.go:289] output of [docker network inspect calico-364928]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-364928 not found
	
	** /stderr **
	I0904 21:58:33.719194  695010 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 21:58:33.737630  695010 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5502e71d097a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:ef:c1:96:ed:36} reservation:<nil>}
	I0904 21:58:33.738701  695010 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e63f0d636ac7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:63:34:a9:e4:57} reservation:<nil>}
	I0904 21:58:33.739281  695010 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-66f991fb509e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:87:15:f5:6e:d8} reservation:<nil>}
	I0904 21:58:33.739906  695010 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bf0745940238 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:9d:2a:98:20:f7} reservation:<nil>}
	I0904 21:58:33.740594  695010 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-405fa3a98958 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4a:60:73:ea:52:66} reservation:<nil>}
	I0904 21:58:33.741368  695010 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ba68c04842cf IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:d6:8e:7c:3d:35:2b} reservation:<nil>}
	I0904 21:58:33.742316  695010 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f96bb0}
	I0904 21:58:33.742338  695010 network_create.go:124] attempt to create docker network calico-364928 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0904 21:58:33.742391  695010 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-364928 calico-364928
	I0904 21:58:33.801452  695010 network_create.go:108] docker network calico-364928 192.168.103.0/24 created
	I0904 21:58:33.801481  695010 kic.go:121] calculated static IP "192.168.103.2" for the "calico-364928" container
	I0904 21:58:33.801535  695010 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 21:58:33.817701  695010 cli_runner.go:164] Run: docker volume create calico-364928 --label name.minikube.sigs.k8s.io=calico-364928 --label created_by.minikube.sigs.k8s.io=true
	I0904 21:58:33.834946  695010 oci.go:103] Successfully created a docker volume calico-364928
	I0904 21:58:33.835009  695010 cli_runner.go:164] Run: docker run --rm --name calico-364928-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-364928 --entrypoint /usr/bin/test -v calico-364928:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -d /var/lib
	I0904 21:58:34.318476  695010 oci.go:107] Successfully prepared a docker volume calico-364928
	I0904 21:58:34.318531  695010 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 21:58:34.318555  695010 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 21:58:34.318628  695010 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-364928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 21:58:40.202663  695010 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-364928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir: (5.883985335s)
	I0904 21:58:40.202692  695010 kic.go:203] duration metric: took 5.884134099s to extract preloaded images to volume ...
	W0904 21:58:40.202810  695010 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 21:58:40.202907  695010 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 21:58:40.280325  695010 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-364928 --name calico-364928 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-364928 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-364928 --network calico-364928 --ip 192.168.103.2 --volume calico-364928:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9
	I0904 21:58:40.637708  695010 cli_runner.go:164] Run: docker container inspect calico-364928 --format={{.State.Running}}
	I0904 21:58:40.659317  695010 cli_runner.go:164] Run: docker container inspect calico-364928 --format={{.State.Status}}
	I0904 21:58:40.682504  695010 cli_runner.go:164] Run: docker exec calico-364928 stat /var/lib/dpkg/alternatives/iptables
	I0904 21:58:40.726371  695010 oci.go:144] the created container "calico-364928" has a running status.
	I0904 21:58:40.726419  695010 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/calico-364928/id_rsa...
	I0904 21:58:41.004070  695010 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21490-384635/.minikube/machines/calico-364928/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 21:58:41.030632  695010 cli_runner.go:164] Run: docker container inspect calico-364928 --format={{.State.Status}}
	I0904 21:58:41.058040  695010 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 21:58:41.058057  695010 kic_runner.go:114] Args: [docker exec --privileged calico-364928 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 21:58:41.150624  695010 cli_runner.go:164] Run: docker container inspect calico-364928 --format={{.State.Status}}
	I0904 21:58:41.183120  695010 machine.go:93] provisionDockerMachine start ...
	I0904 21:58:41.183230  695010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-364928
	I0904 21:58:41.216776  695010 main.go:141] libmachine: Using SSH client type: native
	I0904 21:58:41.217519  695010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I0904 21:58:41.217593  695010 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 21:58:41.348028  695010 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-364928
	
	I0904 21:58:41.348056  695010 ubuntu.go:182] provisioning hostname "calico-364928"
	I0904 21:58:41.348122  695010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-364928
	I0904 21:58:41.372830  695010 main.go:141] libmachine: Using SSH client type: native
	I0904 21:58:41.373070  695010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I0904 21:58:41.373087  695010 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-364928 && echo "calico-364928" | sudo tee /etc/hostname
	I0904 21:58:41.505814  695010 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-364928
	
	I0904 21:58:41.505906  695010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-364928
	I0904 21:58:41.525366  695010 main.go:141] libmachine: Using SSH client type: native
	I0904 21:58:41.525610  695010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I0904 21:58:41.525631  695010 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-364928' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-364928/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-364928' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 21:58:41.640437  695010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 21:58:41.640468  695010 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21490-384635/.minikube CaCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21490-384635/.minikube}
	I0904 21:58:41.640498  695010 ubuntu.go:190] setting up certificates
	I0904 21:58:41.640511  695010 provision.go:84] configureAuth start
	I0904 21:58:41.640561  695010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-364928
	I0904 21:58:41.658098  695010 provision.go:143] copyHostCerts
	I0904 21:58:41.658160  695010 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem, removing ...
	I0904 21:58:41.658171  695010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem
	I0904 21:58:41.658229  695010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem (1123 bytes)
	I0904 21:58:41.658311  695010 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem, removing ...
	I0904 21:58:41.658336  695010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem
	I0904 21:58:41.658372  695010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem (1675 bytes)
	I0904 21:58:41.658431  695010 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem, removing ...
	I0904 21:58:41.658440  695010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem
	I0904 21:58:41.658470  695010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem (1078 bytes)
	I0904 21:58:41.658535  695010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem org=jenkins.calico-364928 san=[127.0.0.1 192.168.103.2 calico-364928 localhost minikube]
	I0904 21:58:41.847052  695010 provision.go:177] copyRemoteCerts
	I0904 21:58:41.847106  695010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 21:58:41.847141  695010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-364928
	I0904 21:58:41.864141  695010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/calico-364928/id_rsa Username:docker}
	I0904 21:58:41.950332  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 21:58:41.975837  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 21:58:41.999317  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 21:58:42.025221  695010 provision.go:87] duration metric: took 384.692282ms to configureAuth
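Note: configureAuth above generates a server certificate whose SANs are the entries in the san=[...] list (127.0.0.1, 192.168.103.2, calico-364928, localhost, minikube) and copies it to /etc/docker/server.pem inside the node. An illustrative way to confirm the SANs on the copied cert, assuming openssl is available in the kicbase image (not part of this run):

    out/minikube-linux-amd64 -p calico-364928 ssh -- "sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'"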
	I0904 21:58:42.025255  695010 ubuntu.go:206] setting minikube options for container-runtime
	I0904 21:58:42.025460  695010 config.go:182] Loaded profile config "calico-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:58:42.025610  695010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-364928
	I0904 21:58:42.046451  695010 main.go:141] libmachine: Using SSH client type: native
	I0904 21:58:42.046754  695010 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I0904 21:58:42.046781  695010 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 21:58:42.276016  695010 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 21:58:42.276048  695010 machine.go:96] duration metric: took 1.092904895s to provisionDockerMachine
	I0904 21:58:42.276061  695010 client.go:171] duration metric: took 8.595324767s to LocalClient.Create
	I0904 21:58:42.276084  695010 start.go:167] duration metric: took 8.595387573s to libmachine.API.Create "calico-364928"
	I0904 21:58:42.276093  695010 start.go:293] postStartSetup for "calico-364928" (driver="docker")
	I0904 21:58:42.276105  695010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 21:58:42.276180  695010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 21:58:42.276232  695010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-364928
	I0904 21:58:42.295544  695010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/calico-364928/id_rsa Username:docker}
	I0904 21:58:42.381363  695010 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 21:58:42.384633  695010 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 21:58:42.384660  695010 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 21:58:42.384667  695010 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 21:58:42.384674  695010 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 21:58:42.384683  695010 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/addons for local assets ...
	I0904 21:58:42.384731  695010 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/files for local assets ...
	I0904 21:58:42.384840  695010 filesync.go:149] local asset: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem -> 3883602.pem in /etc/ssl/certs
	I0904 21:58:42.384966  695010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 21:58:42.392724  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem --> /etc/ssl/certs/3883602.pem (1708 bytes)
	I0904 21:58:42.414248  695010 start.go:296] duration metric: took 138.139312ms for postStartSetup
	I0904 21:58:42.414633  695010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-364928
	I0904 21:58:42.433217  695010 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/config.json ...
	I0904 21:58:42.433438  695010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:58:42.433479  695010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-364928
	I0904 21:58:42.451993  695010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/calico-364928/id_rsa Username:docker}
	I0904 21:58:42.538454  695010 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 21:58:42.543375  695010 start.go:128] duration metric: took 8.865359291s to createHost
	I0904 21:58:42.543401  695010 start.go:83] releasing machines lock for "calico-364928", held for 8.86547251s
	I0904 21:58:42.543470  695010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-364928
	I0904 21:58:42.565042  695010 ssh_runner.go:195] Run: cat /version.json
	I0904 21:58:42.565070  695010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 21:58:42.565099  695010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-364928
	I0904 21:58:42.565158  695010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-364928
	I0904 21:58:42.584578  695010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/calico-364928/id_rsa Username:docker}
	I0904 21:58:42.584924  695010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/calico-364928/id_rsa Username:docker}
	I0904 21:58:42.769635  695010 ssh_runner.go:195] Run: systemctl --version
	I0904 21:58:42.774432  695010 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 21:58:42.917681  695010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 21:58:42.921944  695010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:58:42.939020  695010 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 21:58:42.939091  695010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:58:42.967193  695010 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0904 21:58:42.967237  695010 start.go:495] detecting cgroup driver to use...
	I0904 21:58:42.967282  695010 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 21:58:42.967335  695010 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 21:58:42.983649  695010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 21:58:42.995815  695010 docker.go:218] disabling cri-docker service (if available) ...
	I0904 21:58:42.995882  695010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 21:58:43.008863  695010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 21:58:43.024266  695010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 21:58:43.104885  695010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 21:58:43.196557  695010 docker.go:234] disabling docker service ...
	I0904 21:58:43.196622  695010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 21:58:43.215369  695010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 21:58:43.226579  695010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 21:58:43.315865  695010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 21:58:43.406084  695010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 21:58:43.417942  695010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 21:58:43.435163  695010 crio.go:59] configuring cri-o to use "registry.k8s.io/pause:3.10.1" as the pause image...
	I0904 21:58:43.435213  695010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:58:43.445563  695010 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 21:58:43.445632  695010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:58:43.456547  695010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:58:43.466069  695010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:58:43.475004  695010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 21:58:43.483584  695010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:58:43.492847  695010 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:58:43.506967  695010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:58:43.516045  695010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 21:58:43.523476  695010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 21:58:43.530878  695010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:58:43.625825  695010 ssh_runner.go:195] Run: sudo systemctl restart crio
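Note: every sed edit above rewrites the same CRI-O drop-in, /etc/crio/crio.conf.d/02-crio.conf, before crio is restarted. An illustrative check (not captured from this run) and the values the edits should have left behind:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, based on the sed commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",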
	I0904 21:58:43.730695  695010 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 21:58:43.730760  695010 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 21:58:43.735904  695010 start.go:563] Will wait 60s for crictl version
	I0904 21:58:43.735955  695010 ssh_runner.go:195] Run: which crictl
	I0904 21:58:43.739212  695010 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 21:58:43.774354  695010 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 21:58:43.774450  695010 ssh_runner.go:195] Run: crio --version
	I0904 21:58:43.808954  695010 ssh_runner.go:195] Run: crio --version
	I0904 21:58:43.847323  695010 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 21:58:43.848610  695010 cli_runner.go:164] Run: docker network inspect calico-364928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 21:58:43.869360  695010 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0904 21:58:43.872971  695010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 21:58:43.884051  695010 kubeadm.go:875] updating cluster {Name:calico-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 21:58:43.884162  695010 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 21:58:43.884208  695010 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 21:58:43.970607  695010 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 21:58:43.970632  695010 crio.go:433] Images already preloaded, skipping extraction
	I0904 21:58:43.970686  695010 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 21:58:44.007562  695010 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 21:58:44.007588  695010 cache_images.go:85] Images are preloaded, skipping loading
	I0904 21:58:44.007597  695010 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 crio true true} ...
	I0904 21:58:44.007683  695010 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-364928 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0904 21:58:44.007743  695010 ssh_runner.go:195] Run: crio config
	I0904 21:58:44.051193  695010 cni.go:84] Creating CNI manager for "calico"
	I0904 21:58:44.051219  695010 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 21:58:44.051249  695010 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-364928 NodeName:calico-364928 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 21:58:44.051401  695010 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-364928"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 21:58:44.051467  695010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 21:58:44.059876  695010 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 21:58:44.059947  695010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 21:58:44.068366  695010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I0904 21:58:44.083810  695010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 21:58:44.100894  695010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
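Note: the kubeadm configuration dumped above is written to /var/tmp/minikube/kubeadm.yaml.new and later copied over /var/tmp/minikube/kubeadm.yaml before init runs. If the bundled kubeadm supports the validate subcommand (available in recent kubeadm releases), an illustrative sanity check of that file would be:

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new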
	I0904 21:58:44.117948  695010 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0904 21:58:44.121215  695010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 21:58:44.131358  695010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:58:44.214309  695010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 21:58:44.229021  695010 certs.go:68] Setting up /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928 for IP: 192.168.103.2
	I0904 21:58:44.229055  695010 certs.go:194] generating shared ca certs ...
	I0904 21:58:44.229077  695010 certs.go:226] acquiring lock for ca certs: {Name:mkab35eaf89c739a644dd45428dbbd1b30c489d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:58:44.229267  695010 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key
	I0904 21:58:44.229336  695010 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key
	I0904 21:58:44.229350  695010 certs.go:256] generating profile certs ...
	I0904 21:58:44.229414  695010 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/client.key
	I0904 21:58:44.229429  695010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/client.crt with IP's: []
	I0904 21:58:44.490405  695010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/client.crt ...
	I0904 21:58:44.490431  695010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/client.crt: {Name:mk723d0d8409f3b9f9e76745101b0aa28df61d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:58:44.490603  695010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/client.key ...
	I0904 21:58:44.490617  695010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/client.key: {Name:mk349305a02f1496fddf7aeb8a56179a87d16f20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:58:44.490727  695010 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.key.9e892600
	I0904 21:58:44.490743  695010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.crt.9e892600 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0904 21:58:44.752377  695010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.crt.9e892600 ...
	I0904 21:58:44.752411  695010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.crt.9e892600: {Name:mk3b4de6e9323abd0401b116c03182240973cbc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:58:44.752597  695010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.key.9e892600 ...
	I0904 21:58:44.752622  695010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.key.9e892600: {Name:mk6f306d0264770664b78a10f54508a562add1a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:58:44.752767  695010 certs.go:381] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.crt.9e892600 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.crt
	I0904 21:58:44.752883  695010 certs.go:385] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.key.9e892600 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.key
	I0904 21:58:44.752967  695010 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/proxy-client.key
	I0904 21:58:44.752989  695010 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/proxy-client.crt with IP's: []
	I0904 21:58:45.197532  695010 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/proxy-client.crt ...
	I0904 21:58:45.197567  695010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/proxy-client.crt: {Name:mk928cc7e8979fe07d1ea2ce928db4feb9262f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:58:45.197731  695010 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/proxy-client.key ...
	I0904 21:58:45.197747  695010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/proxy-client.key: {Name:mk4ec7316f1fcdeaffc74ca23060adbe7025c92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:58:45.197954  695010 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360.pem (1338 bytes)
	W0904 21:58:45.197997  695010 certs.go:480] ignoring /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360_empty.pem, impossibly tiny 0 bytes
	I0904 21:58:45.198009  695010 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem (1679 bytes)
	I0904 21:58:45.198029  695010 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem (1078 bytes)
	I0904 21:58:45.198051  695010 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem (1123 bytes)
	I0904 21:58:45.198075  695010 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem (1675 bytes)
	I0904 21:58:45.198110  695010 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem (1708 bytes)
	I0904 21:58:45.198642  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 21:58:45.221649  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 21:58:45.243282  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 21:58:45.264438  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 21:58:45.285236  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 21:58:45.305921  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 21:58:45.326523  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 21:58:45.347323  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/calico-364928/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 21:58:45.368146  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360.pem --> /usr/share/ca-certificates/388360.pem (1338 bytes)
	I0904 21:58:45.389560  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem --> /usr/share/ca-certificates/3883602.pem (1708 bytes)
	I0904 21:58:45.410631  695010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 21:58:45.431877  695010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 21:58:45.447376  695010 ssh_runner.go:195] Run: openssl version
	I0904 21:58:45.452205  695010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3883602.pem && ln -fs /usr/share/ca-certificates/3883602.pem /etc/ssl/certs/3883602.pem"
	I0904 21:58:45.461117  695010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3883602.pem
	I0904 21:58:45.464732  695010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 21:07 /usr/share/ca-certificates/3883602.pem
	I0904 21:58:45.464809  695010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3883602.pem
	I0904 21:58:45.471266  695010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3883602.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 21:58:45.479787  695010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 21:58:45.488469  695010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:58:45.491650  695010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:56 /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:58:45.491708  695010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:58:45.498044  695010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 21:58:45.506485  695010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388360.pem && ln -fs /usr/share/ca-certificates/388360.pem /etc/ssl/certs/388360.pem"
	I0904 21:58:45.515036  695010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388360.pem
	I0904 21:58:45.518269  695010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 21:07 /usr/share/ca-certificates/388360.pem
	I0904 21:58:45.518308  695010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388360.pem
	I0904 21:58:45.524509  695010 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/388360.pem /etc/ssl/certs/51391683.0"
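Note: the openssl x509 -hash -noout / ln -fs pairs above implement the standard OpenSSL CA directory layout: each CA placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a .0 suffix (e.g. minikubeCA.pem is linked from b5213941.0 above). An equivalent manual check might look like:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink to /etc/ssl/certs/minikubeCA.pem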
	I0904 21:58:45.532966  695010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 21:58:45.536250  695010 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 21:58:45.536305  695010 kubeadm.go:392] StartCluster: {Name:calico-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:58:45.536384  695010 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 21:58:45.536432  695010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 21:58:45.569813  695010 cri.go:89] found id: ""
	I0904 21:58:45.569893  695010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 21:58:45.578000  695010 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 21:58:45.585595  695010 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 21:58:45.585642  695010 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 21:58:45.593198  695010 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 21:58:45.593216  695010 kubeadm.go:157] found existing configuration files:
	
	I0904 21:58:45.593249  695010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 21:58:45.600480  695010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 21:58:45.600529  695010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 21:58:45.607631  695010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 21:58:45.615060  695010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 21:58:45.615105  695010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 21:58:45.622413  695010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 21:58:45.629617  695010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 21:58:45.629653  695010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 21:58:45.637001  695010 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 21:58:45.644441  695010 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 21:58:45.644481  695010 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 21:58:45.652087  695010 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 21:58:45.703817  695010 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0904 21:58:45.704101  695010 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0904 21:58:45.758272  695010 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 21:58:59.487156  695010 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 21:58:59.487249  695010 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 21:58:59.487385  695010 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 21:58:59.487489  695010 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0904 21:58:59.487571  695010 kubeadm.go:310] OS: Linux
	I0904 21:58:59.487638  695010 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 21:58:59.487712  695010 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 21:58:59.487779  695010 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 21:58:59.487859  695010 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 21:58:59.487936  695010 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 21:58:59.488020  695010 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 21:58:59.488110  695010 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 21:58:59.488191  695010 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 21:58:59.488259  695010 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 21:58:59.488392  695010 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 21:58:59.488531  695010 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 21:58:59.488665  695010 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 21:58:59.488747  695010 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 21:58:59.490433  695010 out.go:252]   - Generating certificates and keys ...
	I0904 21:58:59.490525  695010 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 21:58:59.490617  695010 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 21:58:59.490713  695010 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 21:58:59.490796  695010 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 21:58:59.490874  695010 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 21:58:59.490934  695010 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 21:58:59.491005  695010 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 21:58:59.491166  695010 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-364928 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0904 21:58:59.491231  695010 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 21:58:59.491401  695010 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-364928 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0904 21:58:59.491512  695010 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 21:58:59.491625  695010 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 21:58:59.491695  695010 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 21:58:59.491772  695010 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 21:58:59.491839  695010 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 21:58:59.491903  695010 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 21:58:59.491968  695010 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 21:58:59.492035  695010 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 21:58:59.492097  695010 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 21:58:59.492182  695010 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 21:58:59.492263  695010 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 21:58:59.493615  695010 out.go:252]   - Booting up control plane ...
	I0904 21:58:59.493769  695010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 21:58:59.493902  695010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 21:58:59.494016  695010 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 21:58:59.494168  695010 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 21:58:59.494337  695010 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 21:58:59.494499  695010 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 21:58:59.494609  695010 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 21:58:59.494661  695010 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 21:58:59.494818  695010 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 21:58:59.494966  695010 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 21:58:59.495053  695010 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501361099s
	I0904 21:58:59.495229  695010 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 21:58:59.495341  695010 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0904 21:58:59.495457  695010 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 21:58:59.495567  695010 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 21:58:59.495665  695010 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.704559447s
	I0904 21:58:59.495729  695010 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.816872148s
	I0904 21:58:59.495799  695010 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001645142s
	I0904 21:58:59.495898  695010 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 21:58:59.496034  695010 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 21:58:59.496167  695010 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 21:58:59.496457  695010 kubeadm.go:310] [mark-control-plane] Marking the node calico-364928 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 21:58:59.496575  695010 kubeadm.go:310] [bootstrap-token] Using token: 9ypm2z.45h6yrq1xgh3uaj7
	I0904 21:58:59.498075  695010 out.go:252]   - Configuring RBAC rules ...
	I0904 21:58:59.498197  695010 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 21:58:59.498279  695010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 21:58:59.498444  695010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 21:58:59.498576  695010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 21:58:59.498669  695010 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 21:58:59.498737  695010 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 21:58:59.498826  695010 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 21:58:59.498868  695010 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 21:58:59.498903  695010 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 21:58:59.498906  695010 kubeadm.go:310] 
	I0904 21:58:59.498955  695010 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 21:58:59.498959  695010 kubeadm.go:310] 
	I0904 21:58:59.499018  695010 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 21:58:59.499021  695010 kubeadm.go:310] 
	I0904 21:58:59.499044  695010 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 21:58:59.499113  695010 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 21:58:59.499161  695010 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 21:58:59.499164  695010 kubeadm.go:310] 
	I0904 21:58:59.499205  695010 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 21:58:59.499208  695010 kubeadm.go:310] 
	I0904 21:58:59.499245  695010 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 21:58:59.499248  695010 kubeadm.go:310] 
	I0904 21:58:59.499290  695010 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 21:58:59.499347  695010 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 21:58:59.499417  695010 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 21:58:59.499424  695010 kubeadm.go:310] 
	I0904 21:58:59.499504  695010 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 21:58:59.499570  695010 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 21:58:59.499573  695010 kubeadm.go:310] 
	I0904 21:58:59.499637  695010 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9ypm2z.45h6yrq1xgh3uaj7 \
	I0904 21:58:59.499716  695010 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 \
	I0904 21:58:59.499732  695010 kubeadm.go:310] 	--control-plane 
	I0904 21:58:59.499735  695010 kubeadm.go:310] 
	I0904 21:58:59.499799  695010 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 21:58:59.499802  695010 kubeadm.go:310] 
	I0904 21:58:59.499865  695010 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9ypm2z.45h6yrq1xgh3uaj7 \
	I0904 21:58:59.499956  695010 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 
	I0904 21:58:59.499963  695010 cni.go:84] Creating CNI manager for "calico"
	I0904 21:58:59.501448  695010 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0904 21:58:59.503728  695010 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0904 21:58:59.503757  695010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0904 21:58:59.524524  695010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 21:59:01.014216  695010 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.489655913s)
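Note: applying the Calico manifest only creates the resources; the calico-node DaemonSet and calico-kube-controllers Deployment still have to become Ready before pods get networking. An illustrative way to watch that from the host (the k8s-app=calico-node label is the one used by the upstream Calico manifest):

    kubectl --context calico-364928 -n kube-system get pods -l k8s-app=calico-node -w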
	I0904 21:59:01.014267  695010 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 21:59:01.014408  695010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 21:59:01.014520  695010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-364928 minikube.k8s.io/updated_at=2025_09_04T21_59_01_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a minikube.k8s.io/name=calico-364928 minikube.k8s.io/primary=true
	I0904 21:59:01.052008  695010 ops.go:34] apiserver oom_adj: -16
	I0904 21:59:01.166534  695010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 21:59:01.667420  695010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 21:59:02.166929  695010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 21:59:02.666854  695010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 21:59:03.167586  695010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 21:59:03.667170  695010 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 21:59:03.737585  695010 kubeadm.go:1105] duration metric: took 2.723197788s to wait for elevateKubeSystemPrivileges
	I0904 21:59:03.737629  695010 kubeadm.go:394] duration metric: took 18.20132868s to StartCluster
	I0904 21:59:03.737648  695010 settings.go:142] acquiring lock: {Name:mke06342cfb6705345a5c7324f763dc44aea4569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:59:03.737707  695010 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 21:59:03.738798  695010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/kubeconfig: {Name:mk6b311573f3fade9cba8f894d5c9f5ca76d1e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:59:03.739024  695010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 21:59:03.739031  695010 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 21:59:03.739110  695010 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 21:59:03.739213  695010 addons.go:69] Setting storage-provisioner=true in profile "calico-364928"
	I0904 21:59:03.739228  695010 addons.go:69] Setting default-storageclass=true in profile "calico-364928"
	I0904 21:59:03.739213  695010 config.go:182] Loaded profile config "calico-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:59:03.739253  695010 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-364928"
	I0904 21:59:03.739236  695010 addons.go:238] Setting addon storage-provisioner=true in "calico-364928"
	I0904 21:59:03.739332  695010 host.go:66] Checking if "calico-364928" exists ...
	I0904 21:59:03.739703  695010 cli_runner.go:164] Run: docker container inspect calico-364928 --format={{.State.Status}}
	I0904 21:59:03.739866  695010 cli_runner.go:164] Run: docker container inspect calico-364928 --format={{.State.Status}}
	I0904 21:59:03.740494  695010 out.go:179] * Verifying Kubernetes components...
	I0904 21:59:03.742141  695010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:59:03.765563  695010 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 21:59:03.766810  695010 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 21:59:03.766832  695010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 21:59:03.766889  695010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-364928
	I0904 21:59:03.767393  695010 addons.go:238] Setting addon default-storageclass=true in "calico-364928"
	I0904 21:59:03.767439  695010 host.go:66] Checking if "calico-364928" exists ...
	I0904 21:59:03.767870  695010 cli_runner.go:164] Run: docker container inspect calico-364928 --format={{.State.Status}}
	I0904 21:59:03.787402  695010 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 21:59:03.787423  695010 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 21:59:03.787467  695010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-364928
	I0904 21:59:03.788507  695010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/calico-364928/id_rsa Username:docker}
	I0904 21:59:03.817925  695010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/calico-364928/id_rsa Username:docker}
	I0904 21:59:03.883643  695010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 21:59:03.978666  695010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 21:59:04.057753  695010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 21:59:04.057770  695010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 21:59:04.564629  695010 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0904 21:59:04.934366  695010 node_ready.go:35] waiting up to 15m0s for node "calico-364928" to be "Ready" ...
	I0904 21:59:04.937955  695010 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0904 21:59:04.939044  695010 addons.go:514] duration metric: took 1.199933698s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0904 21:59:05.069023  695010 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-364928" context rescaled to 1 replicas
	W0904 21:59:06.936901  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:08.938614  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:11.437828  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:13.438157  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:15.937561  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:17.938094  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:20.437937  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:22.937647  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:24.937797  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:27.437411  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:29.937597  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:31.938414  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:34.436902  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:36.438236  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:38.937608  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:40.937888  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:43.436971  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:45.437436  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:47.937382  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:49.937915  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:51.938362  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:54.437228  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:56.937601  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 21:59:58.937745  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:00.938033  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:02.938104  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:05.438318  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:07.937888  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:09.937944  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:12.438430  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:14.937382  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:17.437134  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:19.437316  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:21.937653  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:24.438049  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:26.438108  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:28.438352  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:30.938445  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:33.437432  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:35.437710  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:37.937811  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:39.937960  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:42.437821  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:44.437988  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:46.937817  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:49.438116  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:51.938028  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:54.437978  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:56.937556  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:59.437273  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:01.437472  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:03.437854  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:05.937387  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:07.937974  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:10.438106  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:12.937710  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:14.937965  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:17.437507  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:19.438103  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:21.937987  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:23.938056  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:26.437578  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:28.438261  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:30.937722  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:33.437293  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:35.437642  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:37.937744  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:40.437678  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:42.437923  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:44.437987  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:46.937265  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:49.436866  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:51.437349  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:53.438359  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:55.937214  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:58.437298  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:00.938071  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:03.437903  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:05.937716  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:08.437864  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:10.438463  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:12.938322  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:15.437650  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:17.438523  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:19.937736  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:21.937952  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:24.437850  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:26.937448  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:29.437316  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:31.438207  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:33.938001  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:36.437346  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:38.437594  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:40.937374  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:42.937487  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:44.937549  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:47.437717  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:49.937580  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:51.937802  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:54.437967  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:56.937548  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:59.437491  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:01.437951  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:03.937792  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:06.437308  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:08.438195  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:10.936974  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:12.937521  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:14.938030  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:17.438133  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:19.937883  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:22.437545  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:24.937562  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:26.937796  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:29.438021  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:31.937501  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:33.937898  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:36.438176  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:38.937161  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:40.938107  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:43.437266  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:45.437429  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:47.937225  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:50.437223  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:52.937254  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:55.437375  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:57.438418  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:59.937385  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:02.437352  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:04.437658  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:06.937189  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:09.437373  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:11.437468  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:13.937204  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:16.437206  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:18.437886  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:20.438103  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:22.937047  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:24.937427  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:27.437105  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:29.437276  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:31.437820  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:33.937680  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:35.937736  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:38.437903  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:40.937618  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:42.937757  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:45.437774  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:47.937195  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:50.437564  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:52.437750  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:54.936920  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:56.937154  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:58.937443  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:01.437319  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:03.437808  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:05.937874  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:08.438039  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:10.937114  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:12.937617  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:15.437197  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:17.937966  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:20.436943  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:22.437112  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:24.437935  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:26.937596  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:28.938029  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:31.437927  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:33.937288  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:35.937367  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:37.937895  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:40.438111  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:42.937140  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:45.437042  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:47.437615  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:49.937478  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:51.937528  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:54.437233  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:56.437925  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:58.936789  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:00.937673  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:02.938006  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:05.437466  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:07.936896  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:09.936963  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:11.937136  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:13.937556  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:15.937885  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:17.938008  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:20.437052  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:22.937018  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:24.937171  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:26.937444  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:28.937556  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:31.437141  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:33.437332  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:35.438037  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:37.937281  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:39.937969  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:42.436919  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:44.437329  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:46.437924  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:48.937399  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:50.938023  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:52.938100  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:55.437938  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:57.937246  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:59.937506  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:02.437419  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:04.937324  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:07.436974  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:09.437077  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:11.936955  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:13.937505  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:16.436626  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:18.437679  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:20.438007  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:22.938133  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:25.437713  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:27.437915  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:29.937424  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:32.437450  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:34.937171  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:36.937391  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:39.437273  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:41.438401  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:43.937009  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:45.937776  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:48.437009  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:50.437785  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:52.937852  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:55.437836  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:57.936991  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:00.437972  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:02.937812  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:05.437938  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:07.937728  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:09.938106  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:12.437298  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:14.937213  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:16.937551  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:19.437480  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:21.437646  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:23.937892  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:26.437152  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:28.437493  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:30.438117  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:32.937234  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:35.437427  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:37.936980  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:39.937223  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:42.437165  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:44.437447  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:46.936969  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:48.937027  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:50.938116  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:53.438051  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:55.937450  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:58.437220  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:00.437569  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:02.437722  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:04.437966  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:06.937695  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:09.437701  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:11.937086  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:13.937824  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:16.437931  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:18.938141  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:20.939971  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:23.437463  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:25.937377  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:28.437269  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:30.437955  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:32.937797  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:35.438015  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:37.937882  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:39.938160  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:42.437116  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:44.437854  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:46.937315  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:48.937538  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:50.937626  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:53.437642  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:55.937098  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:58.436982  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:00.936923  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:02.937655  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:05.438068  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:07.937548  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:10.437273  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:12.437532  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:14.937301  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:17.437163  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:19.437226  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:21.937051  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:23.937726  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:26.437253  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:28.437288  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:30.437779  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:32.937018  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:34.938104  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:37.437554  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:39.937712  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:42.437382  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:44.437743  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:46.937816  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:49.438014  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:51.937372  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:53.937840  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:56.437444  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:58.937084  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:00.937447  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:02.938129  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:05.437617  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:07.437667  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:09.438098  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:11.937390  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:14.437008  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:16.437976  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:18.937591  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:20.937833  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:23.437285  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:25.437412  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:27.437726  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:29.937574  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:32.437756  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:34.937450  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:37.437331  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:39.937470  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:42.436799  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:44.437513  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:46.937227  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:48.937900  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:51.436971  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:53.438014  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:55.439752  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:57.937708  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:00.437280  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:02.937934  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:05.437147  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:07.936981  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:10.436926  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:12.437055  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:14.437362  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:16.437991  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:18.937371  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:20.937530  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:22.937944  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:25.437991  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:27.936897  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:29.937105  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:31.937768  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:34.437874  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:36.937223  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:38.937967  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:41.437363  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:43.937464  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:46.437158  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:48.437465  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:50.437747  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:52.937856  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:55.437315  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:57.937279  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:00.437211  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:02.437431  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:04.938022  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:07.437698  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:09.938121  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:12.437154  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:14.437397  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:16.937072  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:18.937583  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:20.938176  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:23.437549  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:25.937311  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:27.937633  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:30.437210  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:32.437479  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:34.437554  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:36.938016  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:39.437404  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:41.437781  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:43.937221  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:45.937478  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:48.436953  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:50.437316  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:52.937412  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:54.937774  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:57.438168  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:59.937210  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:14:02.437333  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:14:04.437856  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:14:04.935455  695010 node_ready.go:38] duration metric: took 15m0.001042752s for node "calico-364928" to be "Ready" ...
	I0904 22:14:04.937400  695010 out.go:203] 
	W0904 22:14:04.938554  695010 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0904 22:14:04.938569  695010 out.go:285] * 
	* 
	W0904 22:14:04.940189  695010 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 22:14:04.942277  695010 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (931.54s)
E0904 22:14:25.637631  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:14:35.143682  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:14:35.741062  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/custom-flannel-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:15:02.006029  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:15:13.969724  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/enable-default-cni-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:15:35.556297  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/no-preload-093695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:15:48.699888  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:16:06.087499  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/flannel-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
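The calico run above fails because the node never leaves NotReady within the 15m wait, which usually means the Calico CNI pods never became ready on that node. The commands below are a minimal diagnostic sketch against the same profile, not something the test itself runs; the profile/context name "calico-364928" is taken from the log, the k8s-app=calico-node selector is the label the stock Calico manifest applies, and the image tags are placeholders. One plausible cause, consistent with the Docker Hub rate-limit errors in the dashboard test below, is that the Calico images could not be pulled.

    # check the node condition and the Calico pods in the profile's cluster
    kubectl --context calico-364928 get nodes -o wide
    kubectl --context calico-364928 -n kube-system get pods -l k8s-app=calico-node -o wide
    kubectl --context calico-364928 -n kube-system describe pods -l k8s-app=calico-node

    # if the events show pull failures (e.g. docker.io rate limiting), pulling the
    # images on the host and loading them into the profile is one workaround
    # (<tag> is a placeholder for the Calico version pinned in the manifest):
    docker pull docker.io/calico/node:<tag>
    docker pull docker.io/calico/cni:<tag>
    minikube -p calico-364928 image load docker.io/calico/node:<tag>
    minikube -p calico-364928 image load docker.io/calico/cni:<tag>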

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-22q8g" [6e7da225-bc40-402a-aacd-963133c9e211] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-04 22:07:39.801839658 +0000 UTC m=+4336.645442211
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 describe po kubernetes-dashboard-855c9754f9-22q8g -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-601847 describe po kubernetes-dashboard-855c9754f9-22q8g -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-22q8g
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-601847/192.168.76.2
Start Time:       Thu, 04 Sep 2025 21:58:03 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jkms5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-jkms5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  9m35s                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-22q8g to default-k8s-diff-port-601847
Warning  Failed     6m52s                  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m39s (x5 over 9m35s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m37s (x4 over 9m4s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m37s (x5 over 9m4s)   kubelet            Error: ErrImagePull
Warning  Failed     80s (x16 over 9m4s)    kubelet            Error: ImagePullBackOff
Normal   BackOff    13s (x21 over 9m4s)    kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
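
The events above point at Docker Hub's unauthenticated pull rate limit (toomanyrequests) rather than a cluster fault. A minimal mitigation sketch, assuming the host itself is not rate-limited and using the image and profile names from the events above (pull once on the host, then side-load into the node so kubelet never has to contact docker.io):

# image tag and profile name taken from the events above; adjust if they differ
docker pull docker.io/kubernetesui/dashboard:v2.7.0
minikube -p default-k8s-diff-port-601847 image load docker.io/kubernetesui/dashboard:v2.7.0
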
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 logs kubernetes-dashboard-855c9754f9-22q8g -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-601847 logs kubernetes-dashboard-855c9754f9-22q8g -n kubernetes-dashboard: exit status 1 (67.725783ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-22q8g" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-601847 logs kubernetes-dashboard-855c9754f9-22q8g -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
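
For reference, a roughly equivalent manual readiness check (a sketch only; the test itself uses the Go helpers shown above, and the 540s timeout mirrors the 9m0s wait):

kubectl --context default-k8s-diff-port-601847 -n kubernetes-dashboard \
  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s
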
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-601847
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-601847:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5",
	        "Created": "2025-09-04T21:56:17.062908981Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 689541,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T21:57:47.631588991Z",
	            "FinishedAt": "2025-09-04T21:57:46.041219709Z"
	        },
	        "Image": "sha256:26724cb1013292a869503ea8c35fa5a932e6ec3e4e85e318411373ae6a24478b",
	        "ResolvConfPath": "/var/lib/docker/containers/07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5/hostname",
	        "HostsPath": "/var/lib/docker/containers/07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5/hosts",
	        "LogPath": "/var/lib/docker/containers/07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5/07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5-json.log",
	        "Name": "/default-k8s-diff-port-601847",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-601847:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-601847",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5",
	                "LowerDir": "/var/lib/docker/overlay2/573fd9140cabb79a06017c368f52bd18455477eab6e266ebe7ef78d7685daff7-init/diff:/var/lib/docker/overlay2/fcc29a201369e5fb579bfafdafe90606e5b86bd2f4b52beb7f6a97f7e955214d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/573fd9140cabb79a06017c368f52bd18455477eab6e266ebe7ef78d7685daff7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/573fd9140cabb79a06017c368f52bd18455477eab6e266ebe7ef78d7685daff7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/573fd9140cabb79a06017c368f52bd18455477eab6e266ebe7ef78d7685daff7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-601847",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-601847/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-601847",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-601847",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-601847",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f616af140bcfdb4e2e508dd3522c97ac6e046eaba3b2aa145fdf514a9ded67dc",
	            "SandboxKey": "/var/run/docker/netns/f616af140bcf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-601847": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:da:fb:f1:a2:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bf07459402385f2fa05662d4e68f7943fbbac7763a63a2d6af5fc7bff0f17d6a",
	                    "EndpointID": "97a1d39ef2e905864d02978bb17fa696e389b4392166de25fb70e1fcba7c6911",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-601847",
	                        "07ce3aad696c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-601847 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-601847 logs -n 25: (1.096737491s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-364928 sudo iptables -t nat -L -n -v                                 │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo systemctl status kubelet --all --full --no-pager         │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo systemctl cat kubelet --no-pager                         │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo systemctl status docker --all --full --no-pager          │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │                     │
	│ ssh     │ -p bridge-364928 sudo systemctl cat docker --no-pager                          │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo cat /etc/docker/daemon.json                              │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │                     │
	│ ssh     │ -p bridge-364928 sudo docker system info                                       │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │                     │
	│ ssh     │ -p bridge-364928 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │                     │
	│ ssh     │ -p bridge-364928 sudo systemctl cat cri-docker --no-pager                      │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │                     │
	│ ssh     │ -p bridge-364928 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo cri-dockerd --version                                    │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo systemctl status containerd --all --full --no-pager      │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │                     │
	│ ssh     │ -p bridge-364928 sudo systemctl cat containerd --no-pager                      │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo cat /lib/systemd/system/containerd.service               │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo cat /etc/containerd/config.toml                          │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo containerd config dump                                   │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo systemctl status crio --all --full --no-pager            │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo systemctl cat crio --no-pager                            │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ ssh     │ -p bridge-364928 sudo crio config                                              │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	│ delete  │ -p bridge-364928                                                               │ bridge-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:02 UTC │ 04 Sep 25 22:02 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 22:00:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 22:00:42.061711  725643 out.go:360] Setting OutFile to fd 1 ...
	I0904 22:00:42.061808  725643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 22:00:42.061812  725643 out.go:374] Setting ErrFile to fd 2...
	I0904 22:00:42.061816  725643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 22:00:42.062020  725643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 22:00:42.062596  725643 out.go:368] Setting JSON to false
	I0904 22:00:42.063905  725643 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13391,"bootTime":1757009851,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 22:00:42.064015  725643 start.go:140] virtualization: kvm guest
	I0904 22:00:42.066135  725643 out.go:179] * [bridge-364928] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 22:00:42.067371  725643 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 22:00:42.067374  725643 notify.go:220] Checking for updates...
	I0904 22:00:42.069971  725643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 22:00:42.071190  725643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 22:00:42.072493  725643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 22:00:42.073696  725643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 22:00:42.074894  725643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 22:00:42.076414  725643 config.go:182] Loaded profile config "calico-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:00:42.076525  725643 config.go:182] Loaded profile config "default-k8s-diff-port-601847": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:00:42.076621  725643 config.go:182] Loaded profile config "flannel-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:00:42.076780  725643 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 22:00:42.102652  725643 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 22:00:42.102759  725643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 22:00:42.150839  725643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 22:00:42.141763804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 22:00:42.150935  725643 docker.go:318] overlay module found
	I0904 22:00:42.152627  725643 out.go:179] * Using the docker driver based on user configuration
	I0904 22:00:42.153765  725643 start.go:304] selected driver: docker
	I0904 22:00:42.153783  725643 start.go:918] validating driver "docker" against <nil>
	I0904 22:00:42.153795  725643 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 22:00:42.154647  725643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 22:00:42.205855  725643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 22:00:42.19598252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 22:00:42.206031  725643 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 22:00:42.206240  725643 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 22:00:42.208042  725643 out.go:179] * Using Docker driver with root privileges
	I0904 22:00:42.209370  725643 cni.go:84] Creating CNI manager for "bridge"
	I0904 22:00:42.209392  725643 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 22:00:42.209474  725643 start.go:348] cluster config:
	{Name:bridge-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0904 22:00:42.210894  725643 out.go:179] * Starting "bridge-364928" primary control-plane node in "bridge-364928" cluster
	I0904 22:00:42.212036  725643 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 22:00:42.213197  725643 out.go:179] * Pulling base image v0.0.47-1756116447-21413 ...
	I0904 22:00:42.214197  725643 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 22:00:42.214239  725643 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 22:00:42.214254  725643 cache.go:58] Caching tarball of preloaded images
	I0904 22:00:42.214284  725643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0904 22:00:42.214336  725643 preload.go:172] Found /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 22:00:42.214347  725643 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 22:00:42.214433  725643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/config.json ...
	I0904 22:00:42.214451  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/config.json: {Name:mk330067f00b63e01efe897148f5319c2e1cf180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:42.234667  725643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon, skipping pull
	I0904 22:00:42.234693  725643 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 exists in daemon, skipping load
	I0904 22:00:42.234714  725643 cache.go:232] Successfully downloaded all kic artifacts
	I0904 22:00:42.234743  725643 start.go:360] acquireMachinesLock for bridge-364928: {Name:mk2c01c5b822bc2f5bd831325c7a96dfddb208a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 22:00:42.234861  725643 start.go:364] duration metric: took 88.836µs to acquireMachinesLock for "bridge-364928"
	I0904 22:00:42.234889  725643 start.go:93] Provisioning new machine with config: &{Name:bridge-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-364928 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 22:00:42.234993  725643 start.go:125] createHost starting for "" (driver="docker")
	W0904 22:00:39.937960  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:42.437821  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:39.959283  718003 node_ready.go:57] node "flannel-364928" has "Ready":"False" status (will retry)
	W0904 22:00:42.456319  718003 node_ready.go:57] node "flannel-364928" has "Ready":"False" status (will retry)
	I0904 22:00:42.237290  725643 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0904 22:00:42.237548  725643 start.go:159] libmachine.API.Create for "bridge-364928" (driver="docker")
	I0904 22:00:42.237584  725643 client.go:168] LocalClient.Create starting
	I0904 22:00:42.237648  725643 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem
	I0904 22:00:42.237694  725643 main.go:141] libmachine: Decoding PEM data...
	I0904 22:00:42.237717  725643 main.go:141] libmachine: Parsing certificate...
	I0904 22:00:42.237798  725643 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem
	I0904 22:00:42.237832  725643 main.go:141] libmachine: Decoding PEM data...
	I0904 22:00:42.237850  725643 main.go:141] libmachine: Parsing certificate...
	I0904 22:00:42.238177  725643 cli_runner.go:164] Run: docker network inspect bridge-364928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 22:00:42.255272  725643 cli_runner.go:211] docker network inspect bridge-364928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 22:00:42.255342  725643 network_create.go:284] running [docker network inspect bridge-364928] to gather additional debugging logs...
	I0904 22:00:42.255366  725643 cli_runner.go:164] Run: docker network inspect bridge-364928
	W0904 22:00:42.272070  725643 cli_runner.go:211] docker network inspect bridge-364928 returned with exit code 1
	I0904 22:00:42.272101  725643 network_create.go:287] error running [docker network inspect bridge-364928]: docker network inspect bridge-364928: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-364928 not found
	I0904 22:00:42.272115  725643 network_create.go:289] output of [docker network inspect bridge-364928]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-364928 not found
	
	** /stderr **
	I0904 22:00:42.272236  725643 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 22:00:42.289985  725643 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5502e71d097a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:ef:c1:96:ed:36} reservation:<nil>}
	I0904 22:00:42.290961  725643 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e63f0d636ac7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:63:34:a9:e4:57} reservation:<nil>}
	I0904 22:00:42.291514  725643 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-66f991fb509e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:87:15:f5:6e:d8} reservation:<nil>}
	I0904 22:00:42.292170  725643 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bf0745940238 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:9d:2a:98:20:f7} reservation:<nil>}
	I0904 22:00:42.292984  725643 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9ad9d5939106 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:07:97:7a:6d:cd} reservation:<nil>}
	I0904 22:00:42.293921  725643 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e75bc0}
	I0904 22:00:42.293948  725643 network_create.go:124] attempt to create docker network bridge-364928 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0904 22:00:42.293998  725643 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-364928 bridge-364928
	I0904 22:00:42.348216  725643 network_create.go:108] docker network bridge-364928 192.168.94.0/24 created
	I0904 22:00:42.348249  725643 kic.go:121] calculated static IP "192.168.94.2" for the "bridge-364928" container
	I0904 22:00:42.348334  725643 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 22:00:42.365412  725643 cli_runner.go:164] Run: docker volume create bridge-364928 --label name.minikube.sigs.k8s.io=bridge-364928 --label created_by.minikube.sigs.k8s.io=true
	I0904 22:00:42.382276  725643 oci.go:103] Successfully created a docker volume bridge-364928
	I0904 22:00:42.382353  725643 cli_runner.go:164] Run: docker run --rm --name bridge-364928-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-364928 --entrypoint /usr/bin/test -v bridge-364928:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -d /var/lib
	I0904 22:00:42.821548  725643 oci.go:107] Successfully prepared a docker volume bridge-364928
	I0904 22:00:42.821620  725643 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 22:00:42.821653  725643 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 22:00:42.821718  725643 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v bridge-364928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir
	W0904 22:00:44.437988  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:46.937817  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:44.956079  718003 node_ready.go:57] node "flannel-364928" has "Ready":"False" status (will retry)
	W0904 22:00:46.956390  718003 node_ready.go:57] node "flannel-364928" has "Ready":"False" status (will retry)
	W0904 22:00:48.956513  718003 node_ready.go:57] node "flannel-364928" has "Ready":"False" status (will retry)
	I0904 22:00:47.598360  725643 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v bridge-364928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.776586901s)
	I0904 22:00:47.598400  725643 kic.go:203] duration metric: took 4.776741896s to extract preloaded images to volume ...
	W0904 22:00:47.598558  725643 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 22:00:47.598682  725643 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 22:00:47.654953  725643 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-364928 --name bridge-364928 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-364928 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-364928 --network bridge-364928 --ip 192.168.94.2 --volume bridge-364928:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9
	I0904 22:00:47.961081  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Running}}
	I0904 22:00:47.983935  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:00:48.006544  725643 cli_runner.go:164] Run: docker exec bridge-364928 stat /var/lib/dpkg/alternatives/iptables
	I0904 22:00:48.055370  725643 oci.go:144] the created container "bridge-364928" has a running status.
	I0904 22:00:48.055409  725643 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa...
	I0904 22:00:48.938754  725643 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 22:00:48.959569  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:00:48.976653  725643 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 22:00:48.976678  725643 kic_runner.go:114] Args: [docker exec --privileged bridge-364928 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 22:00:49.017120  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:00:49.033524  725643 machine.go:93] provisionDockerMachine start ...
	I0904 22:00:49.033619  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:49.051447  725643 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:49.051790  725643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I0904 22:00:49.051814  725643 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 22:00:49.164290  725643 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-364928
	
	I0904 22:00:49.164322  725643 ubuntu.go:182] provisioning hostname "bridge-364928"
	I0904 22:00:49.164378  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:49.182571  725643 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:49.182793  725643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I0904 22:00:49.182808  725643 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-364928 && echo "bridge-364928" | sudo tee /etc/hostname
	I0904 22:00:49.308328  725643 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-364928
	
	I0904 22:00:49.308406  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:49.325968  725643 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:49.326207  725643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I0904 22:00:49.326235  725643 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-364928' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-364928/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-364928' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 22:00:49.441060  725643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 22:00:49.441089  725643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21490-384635/.minikube CaCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21490-384635/.minikube}
	I0904 22:00:49.441143  725643 ubuntu.go:190] setting up certificates
	I0904 22:00:49.441164  725643 provision.go:84] configureAuth start
	I0904 22:00:49.441220  725643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-364928
	I0904 22:00:49.458619  725643 provision.go:143] copyHostCerts
	I0904 22:00:49.458681  725643 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem, removing ...
	I0904 22:00:49.458695  725643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem
	I0904 22:00:49.458767  725643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem (1078 bytes)
	I0904 22:00:49.458865  725643 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem, removing ...
	I0904 22:00:49.458877  725643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem
	I0904 22:00:49.458915  725643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem (1123 bytes)
	I0904 22:00:49.458985  725643 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem, removing ...
	I0904 22:00:49.458995  725643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem
	I0904 22:00:49.459028  725643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem (1675 bytes)
	I0904 22:00:49.459092  725643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem org=jenkins.bridge-364928 san=[127.0.0.1 192.168.94.2 bridge-364928 localhost minikube]
	I0904 22:00:49.825343  725643 provision.go:177] copyRemoteCerts
	I0904 22:00:49.825399  725643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 22:00:49.825447  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:49.843070  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:00:49.929288  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 22:00:49.952111  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 22:00:49.974969  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 22:00:49.997241  725643 provision.go:87] duration metric: took 556.058591ms to configureAuth
	I0904 22:00:49.997273  725643 ubuntu.go:206] setting minikube options for container-runtime
	I0904 22:00:49.997427  725643 config.go:182] Loaded profile config "bridge-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:00:49.997529  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:50.015408  725643 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:50.015622  725643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I0904 22:00:50.015639  725643 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 22:00:50.220444  725643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 22:00:50.220474  725643 machine.go:96] duration metric: took 1.186921603s to provisionDockerMachine
	I0904 22:00:50.220487  725643 client.go:171] duration metric: took 7.982895636s to LocalClient.Create
	I0904 22:00:50.220512  725643 start.go:167] duration metric: took 7.982966048s to libmachine.API.Create "bridge-364928"
	I0904 22:00:50.220524  725643 start.go:293] postStartSetup for "bridge-364928" (driver="docker")
	I0904 22:00:50.220536  725643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 22:00:50.220599  725643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 22:00:50.220651  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:50.241803  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:00:50.334134  725643 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 22:00:50.337257  725643 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 22:00:50.337291  725643 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 22:00:50.337298  725643 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 22:00:50.337305  725643 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 22:00:50.337325  725643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/addons for local assets ...
	I0904 22:00:50.337372  725643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/files for local assets ...
	I0904 22:00:50.337455  725643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem -> 3883602.pem in /etc/ssl/certs
	I0904 22:00:50.337544  725643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 22:00:50.345491  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem --> /etc/ssl/certs/3883602.pem (1708 bytes)
	I0904 22:00:50.371533  725643 start.go:296] duration metric: took 150.993143ms for postStartSetup
	I0904 22:00:50.371870  725643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-364928
	I0904 22:00:50.393981  725643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/config.json ...
	I0904 22:00:50.394238  725643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 22:00:50.394299  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:50.412484  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:00:50.497713  725643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 22:00:50.501722  725643 start.go:128] duration metric: took 8.266714891s to createHost
	I0904 22:00:50.501744  725643 start.go:83] releasing machines lock for "bridge-364928", held for 8.266868723s
	I0904 22:00:50.501798  725643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-364928
	I0904 22:00:50.519244  725643 ssh_runner.go:195] Run: cat /version.json
	I0904 22:00:50.519277  725643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 22:00:50.519295  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:50.519345  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:50.536666  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:00:50.537512  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:00:50.694681  725643 ssh_runner.go:195] Run: systemctl --version
	I0904 22:00:50.699410  725643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 22:00:50.839250  725643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 22:00:50.844270  725643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 22:00:50.864344  725643 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 22:00:50.864415  725643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 22:00:50.896347  725643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0904 22:00:50.896373  725643 start.go:495] detecting cgroup driver to use...
	I0904 22:00:50.896404  725643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 22:00:50.896478  725643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 22:00:50.911356  725643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 22:00:50.921621  725643 docker.go:218] disabling cri-docker service (if available) ...
	I0904 22:00:50.921669  725643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 22:00:50.933458  725643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 22:00:50.946984  725643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 22:00:51.029569  725643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 22:00:51.108914  725643 docker.go:234] disabling docker service ...
	I0904 22:00:51.108986  725643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 22:00:51.126466  725643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 22:00:51.136964  725643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 22:00:51.210893  725643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 22:00:51.300707  725643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 22:00:51.311330  725643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 22:00:51.326012  725643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 22:00:51.326070  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.334794  725643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 22:00:51.334848  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.343610  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.352356  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.362048  725643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 22:00:51.371363  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.383275  725643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.398675  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.407686  725643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 22:00:51.415159  725643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 22:00:51.422772  725643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 22:00:51.502548  725643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 22:00:51.620440  725643 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 22:00:51.620505  725643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 22:00:51.623952  725643 start.go:563] Will wait 60s for crictl version
	I0904 22:00:51.623999  725643 ssh_runner.go:195] Run: which crictl
	I0904 22:00:51.627189  725643 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 22:00:51.662090  725643 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 22:00:51.662171  725643 ssh_runner.go:195] Run: crio --version
	I0904 22:00:51.697178  725643 ssh_runner.go:195] Run: crio --version
	I0904 22:00:51.731546  725643 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 22:00:51.732569  725643 cli_runner.go:164] Run: docker network inspect bridge-364928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 22:00:51.748597  725643 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0904 22:00:51.752505  725643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 22:00:51.763994  725643 kubeadm.go:875] updating cluster {Name:bridge-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 22:00:51.764113  725643 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 22:00:51.764171  725643 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 22:00:51.833376  725643 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 22:00:51.833400  725643 crio.go:433] Images already preloaded, skipping extraction
	I0904 22:00:51.833456  725643 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 22:00:51.866515  725643 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 22:00:51.866536  725643 cache_images.go:85] Images are preloaded, skipping loading
	I0904 22:00:51.866544  725643 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 crio true true} ...
	I0904 22:00:51.866627  725643 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=bridge-364928 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:bridge-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0904 22:00:51.866688  725643 ssh_runner.go:195] Run: crio config
	I0904 22:00:51.910430  725643 cni.go:84] Creating CNI manager for "bridge"
	I0904 22:00:51.910451  725643 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 22:00:51.910489  725643 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-364928 NodeName:bridge-364928 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 22:00:51.910619  725643 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-364928"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 22:00:51.910673  725643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 22:00:51.919287  725643 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 22:00:51.919344  725643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 22:00:51.927126  725643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 22:00:51.943313  725643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 22:00:51.959657  725643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0904 22:00:51.976125  725643 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0904 22:00:51.979320  725643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 22:00:51.989293  725643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0904 22:00:49.438116  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:51.938028  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:00:50.956390  718003 node_ready.go:49] node "flannel-364928" is "Ready"
	I0904 22:00:50.956416  718003 node_ready.go:38] duration metric: took 20.003181245s for node "flannel-364928" to be "Ready" ...
	I0904 22:00:50.956430  718003 api_server.go:52] waiting for apiserver process to appear ...
	I0904 22:00:50.956473  718003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 22:00:50.967984  718003 api_server.go:72] duration metric: took 20.974991113s to wait for apiserver process to appear ...
	I0904 22:00:50.968016  718003 api_server.go:88] waiting for apiserver healthz status ...
	I0904 22:00:50.968043  718003 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0904 22:00:50.972466  718003 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0904 22:00:50.973547  718003 api_server.go:141] control plane version: v1.34.0
	I0904 22:00:50.973571  718003 api_server.go:131] duration metric: took 5.546507ms to wait for apiserver health ...
	I0904 22:00:50.973581  718003 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 22:00:50.977344  718003 system_pods.go:59] 7 kube-system pods found
	I0904 22:00:50.977374  718003 system_pods.go:61] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:50.977380  718003 system_pods.go:61] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:50.977385  718003 system_pods.go:61] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:50.977389  718003 system_pods.go:61] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:50.977393  718003 system_pods.go:61] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:50.977397  718003 system_pods.go:61] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:50.977402  718003 system_pods.go:61] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:00:50.977413  718003 system_pods.go:74] duration metric: took 3.825887ms to wait for pod list to return data ...
	I0904 22:00:50.977421  718003 default_sa.go:34] waiting for default service account to be created ...
	I0904 22:00:50.979777  718003 default_sa.go:45] found service account: "default"
	I0904 22:00:50.979795  718003 default_sa.go:55] duration metric: took 2.365481ms for default service account to be created ...
	I0904 22:00:50.979802  718003 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 22:00:50.982582  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:50.982615  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:50.982623  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:50.982637  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:50.982643  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:50.982651  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:50.982656  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:50.982664  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:00:50.982695  718003 retry.go:31] will retry after 272.663724ms: missing components: kube-dns
	I0904 22:00:51.259377  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:51.259411  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:51.259417  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:51.259422  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:51.259428  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:51.259434  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:51.259438  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:51.259447  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:00:51.259471  718003 retry.go:31] will retry after 344.99828ms: missing components: kube-dns
	I0904 22:00:51.608603  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:51.608634  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:51.608640  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:51.608645  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:51.608650  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:51.608655  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:51.608660  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:51.608667  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:51.608689  718003 retry.go:31] will retry after 294.96852ms: missing components: kube-dns
	I0904 22:00:51.907526  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:51.907565  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:51.907574  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:51.907583  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:51.907590  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:51.907604  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:51.907613  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:51.907619  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:51.907642  718003 retry.go:31] will retry after 416.023679ms: missing components: kube-dns
	I0904 22:00:52.327518  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:52.327549  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:52.327555  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:52.327561  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:52.327565  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:52.327570  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:52.327573  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:52.327577  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:52.327592  718003 retry.go:31] will retry after 589.759743ms: missing components: kube-dns
	I0904 22:00:52.921004  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:52.921042  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:52.921053  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:52.921062  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:52.921069  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:52.921074  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:52.921079  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:52.921084  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:52.921102  718003 retry.go:31] will retry after 599.459014ms: missing components: kube-dns
	I0904 22:00:53.524439  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:53.524470  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:53.524476  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:53.524484  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:53.524487  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:53.524495  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:53.524500  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:53.524503  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:53.524527  718003 retry.go:31] will retry after 1.117785208s: missing components: kube-dns
	I0904 22:00:52.075854  725643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 22:00:52.088323  725643 certs.go:68] Setting up /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928 for IP: 192.168.94.2
	I0904 22:00:52.088361  725643 certs.go:194] generating shared ca certs ...
	I0904 22:00:52.088390  725643 certs.go:226] acquiring lock for ca certs: {Name:mkab35eaf89c739a644dd45428dbbd1b30c489d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.088561  725643 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key
	I0904 22:00:52.088633  725643 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key
	I0904 22:00:52.088654  725643 certs.go:256] generating profile certs ...
	I0904 22:00:52.088724  725643 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.key
	I0904 22:00:52.088744  725643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.crt with IP's: []
	I0904 22:00:52.147941  725643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.crt ...
	I0904 22:00:52.147972  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.crt: {Name:mkba7540c100fed0888915e572b6d80906b46359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.148130  725643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.key ...
	I0904 22:00:52.148141  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.key: {Name:mkd0c182442997f9aa8ab6a6b8658e4f65cbbe00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.148241  725643 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key.5cac4715
	I0904 22:00:52.148270  725643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt.5cac4715 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0904 22:00:52.526926  725643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt.5cac4715 ...
	I0904 22:00:52.526954  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt.5cac4715: {Name:mkcbc14de6c1e777100f7645a2847e2cb12945ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.527104  725643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key.5cac4715 ...
	I0904 22:00:52.527116  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key.5cac4715: {Name:mkf82a7b61846a0bb437c8a848686f0e6b9429b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.527183  725643 certs.go:381] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt.5cac4715 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt
	I0904 22:00:52.527265  725643 certs.go:385] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key.5cac4715 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key
	I0904 22:00:52.527316  725643 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.key
	I0904 22:00:52.527331  725643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.crt with IP's: []
	I0904 22:00:52.959678  725643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.crt ...
	I0904 22:00:52.959710  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.crt: {Name:mke6d0b557e9e40bd71441fcfbbe9d49d796afba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.959889  725643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.key ...
	I0904 22:00:52.959907  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.key: {Name:mk0ebd647bc20c5900a8ec663bd477f379fcacd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.960109  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360.pem (1338 bytes)
	W0904 22:00:52.960158  725643 certs.go:480] ignoring /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360_empty.pem, impossibly tiny 0 bytes
	I0904 22:00:52.960175  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem (1679 bytes)
	I0904 22:00:52.960215  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem (1078 bytes)
	I0904 22:00:52.960255  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem (1123 bytes)
	I0904 22:00:52.960283  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem (1675 bytes)
	I0904 22:00:52.960330  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem (1708 bytes)
	I0904 22:00:52.960992  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 22:00:52.984818  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 22:00:53.006464  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 22:00:53.028348  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 22:00:53.049941  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 22:00:53.071263  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 22:00:53.093408  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 22:00:53.115180  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 22:00:53.136534  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem --> /usr/share/ca-certificates/3883602.pem (1708 bytes)
	I0904 22:00:53.157876  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 22:00:53.178736  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360.pem --> /usr/share/ca-certificates/388360.pem (1338 bytes)
	I0904 22:00:53.200638  725643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 22:00:53.216451  725643 ssh_runner.go:195] Run: openssl version
	I0904 22:00:53.221277  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3883602.pem && ln -fs /usr/share/ca-certificates/3883602.pem /etc/ssl/certs/3883602.pem"
	I0904 22:00:53.229963  725643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3883602.pem
	I0904 22:00:53.233229  725643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 21:07 /usr/share/ca-certificates/3883602.pem
	I0904 22:00:53.233277  725643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3883602.pem
	I0904 22:00:53.239423  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3883602.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 22:00:53.248324  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 22:00:53.257087  725643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 22:00:53.260046  725643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:56 /usr/share/ca-certificates/minikubeCA.pem
	I0904 22:00:53.260088  725643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 22:00:53.266224  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 22:00:53.274611  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388360.pem && ln -fs /usr/share/ca-certificates/388360.pem /etc/ssl/certs/388360.pem"
	I0904 22:00:53.283006  725643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388360.pem
	I0904 22:00:53.286102  725643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 21:07 /usr/share/ca-certificates/388360.pem
	I0904 22:00:53.286141  725643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388360.pem
	I0904 22:00:53.292491  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/388360.pem /etc/ssl/certs/51391683.0"
	I0904 22:00:53.300835  725643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 22:00:53.303630  725643 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 22:00:53.303681  725643 kubeadm.go:392] StartCluster: {Name:bridge-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 22:00:53.303741  725643 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 22:00:53.303777  725643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 22:00:53.336824  725643 cri.go:89] found id: ""
	I0904 22:00:53.336901  725643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 22:00:53.344949  725643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 22:00:53.353109  725643 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 22:00:53.353161  725643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 22:00:53.360930  725643 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 22:00:53.360957  725643 kubeadm.go:157] found existing configuration files:
	
	I0904 22:00:53.360995  725643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 22:00:53.368620  725643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 22:00:53.368663  725643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 22:00:53.376353  725643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 22:00:53.385602  725643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 22:00:53.385658  725643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 22:00:53.394864  725643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 22:00:53.402681  725643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 22:00:53.402723  725643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 22:00:53.410219  725643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 22:00:53.417877  725643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 22:00:53.417921  725643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 22:00:53.425251  725643 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 22:00:53.478768  725643 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0904 22:00:53.479060  725643 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0904 22:00:53.536925  725643 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0904 22:00:54.437978  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:56.937556  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:00:54.646509  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:54.646544  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:54.646551  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:54.646558  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:54.646562  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:54.646565  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:54.646569  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:54.646572  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:54.646587  718003 retry.go:31] will retry after 1.326366412s: missing components: kube-dns
	I0904 22:00:55.976987  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:55.977020  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:55.977028  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:55.977034  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:55.977038  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:55.977044  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:55.977049  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:55.977054  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:55.977074  718003 retry.go:31] will retry after 1.650931689s: missing components: kube-dns
	I0904 22:00:57.632745  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:57.632809  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:57.632817  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:57.632826  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:57.632832  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:57.632839  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:57.632846  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:57.632852  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:57.632874  718003 retry.go:31] will retry after 1.867355783s: missing components: kube-dns
	W0904 22:00:59.437273  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:01.437472  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:03.437854  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:00:59.504432  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:59.504464  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:59.504470  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:59.504477  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:59.504481  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:59.504484  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:59.504487  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:59.504490  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:59.504507  718003 retry.go:31] will retry after 2.650552146s: missing components: kube-dns
	I0904 22:01:02.160493  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:01:02.160537  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:02.160545  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:01:02.160553  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:01:02.160558  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:01:02.160565  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:01:02.160572  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:01:02.160579  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:01:02.160597  718003 retry.go:31] will retry after 2.230843332s: missing components: kube-dns
	I0904 22:01:04.396266  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:01:04.396299  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Running
	I0904 22:01:04.396308  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:01:04.396316  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:01:04.396322  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:01:04.396326  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:01:04.396330  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:01:04.396334  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:01:04.396346  718003 system_pods.go:126] duration metric: took 13.416536037s to wait for k8s-apps to be running ...
	I0904 22:01:04.396362  718003 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 22:01:04.396415  718003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 22:01:04.408216  718003 system_svc.go:56] duration metric: took 11.844154ms WaitForService to wait for kubelet
	I0904 22:01:04.408242  718003 kubeadm.go:578] duration metric: took 34.415258584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 22:01:04.408261  718003 node_conditions.go:102] verifying NodePressure condition ...
	I0904 22:01:04.411053  718003 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 22:01:04.411082  718003 node_conditions.go:123] node cpu capacity is 8
	I0904 22:01:04.411106  718003 node_conditions.go:105] duration metric: took 2.840641ms to run NodePressure ...
	I0904 22:01:04.411123  718003 start.go:241] waiting for startup goroutines ...
	I0904 22:01:04.411137  718003 start.go:246] waiting for cluster config update ...
	I0904 22:01:04.411156  718003 start.go:255] writing updated cluster config ...
	I0904 22:01:04.411440  718003 ssh_runner.go:195] Run: rm -f paused
	I0904 22:01:04.414598  718003 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 22:01:04.418051  718003 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xsdj5" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.422193  718003 pod_ready.go:94] pod "coredns-66bc5c9577-xsdj5" is "Ready"
	I0904 22:01:04.422213  718003 pod_ready.go:86] duration metric: took 4.141947ms for pod "coredns-66bc5c9577-xsdj5" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.424148  718003 pod_ready.go:83] waiting for pod "etcd-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.427889  718003 pod_ready.go:94] pod "etcd-flannel-364928" is "Ready"
	I0904 22:01:04.427910  718003 pod_ready.go:86] duration metric: took 3.744027ms for pod "etcd-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.429825  718003 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.433469  718003 pod_ready.go:94] pod "kube-apiserver-flannel-364928" is "Ready"
	I0904 22:01:04.433487  718003 pod_ready.go:86] duration metric: took 3.630328ms for pod "kube-apiserver-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.435175  718003 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:05.609987  725643 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 22:01:05.610059  725643 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 22:01:05.610173  725643 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 22:01:05.610251  725643 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0904 22:01:05.610283  725643 kubeadm.go:310] OS: Linux
	I0904 22:01:05.610354  725643 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 22:01:05.610415  725643 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 22:01:05.610517  725643 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 22:01:05.610610  725643 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 22:01:05.610672  725643 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 22:01:05.610746  725643 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 22:01:05.610848  725643 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 22:01:05.610924  725643 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 22:01:05.611009  725643 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 22:01:05.611105  725643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 22:01:05.611207  725643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 22:01:05.611351  725643 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 22:01:05.611435  725643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 22:01:05.612866  725643 out.go:252]   - Generating certificates and keys ...
	I0904 22:01:05.612960  725643 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 22:01:05.613030  725643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 22:01:05.613114  725643 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 22:01:05.613196  725643 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 22:01:05.613295  725643 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 22:01:05.613368  725643 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 22:01:05.613436  725643 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 22:01:05.613598  725643 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-364928 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0904 22:01:05.613681  725643 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 22:01:05.613818  725643 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-364928 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0904 22:01:05.613914  725643 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 22:01:05.614023  725643 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 22:01:05.614075  725643 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 22:01:05.614147  725643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 22:01:05.614214  725643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 22:01:05.614285  725643 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 22:01:05.614356  725643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 22:01:05.614454  725643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 22:01:05.614540  725643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 22:01:05.614642  725643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 22:01:05.614711  725643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 22:01:05.615990  725643 out.go:252]   - Booting up control plane ...
	I0904 22:01:05.616070  725643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 22:01:05.616152  725643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 22:01:05.616240  725643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 22:01:05.616363  725643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 22:01:05.616467  725643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 22:01:05.616605  725643 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 22:01:05.616724  725643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 22:01:05.616826  725643 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 22:01:05.617011  725643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 22:01:05.617172  725643 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 22:01:05.617263  725643 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501462395s
	I0904 22:01:05.617399  725643 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 22:01:05.617504  725643 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0904 22:01:05.617635  725643 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 22:01:05.617763  725643 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 22:01:05.617887  725643 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.662270765s
	I0904 22:01:05.617988  725643 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 5.112965714s
	I0904 22:01:05.618071  725643 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001393475s
	I0904 22:01:05.618233  725643 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 22:01:05.618406  725643 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 22:01:05.618503  725643 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 22:01:05.618769  725643 kubeadm.go:310] [mark-control-plane] Marking the node bridge-364928 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 22:01:05.618848  725643 kubeadm.go:310] [bootstrap-token] Using token: ceznz5.mk1uab4zkkryxz7h
	I0904 22:01:05.620043  725643 out.go:252]   - Configuring RBAC rules ...
	I0904 22:01:05.620133  725643 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 22:01:05.620226  725643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 22:01:05.620353  725643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 22:01:05.620460  725643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 22:01:05.620567  725643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 22:01:05.620644  725643 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 22:01:05.620824  725643 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 22:01:05.620897  725643 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 22:01:05.620965  725643 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 22:01:05.620974  725643 kubeadm.go:310] 
	I0904 22:01:05.621057  725643 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 22:01:05.621064  725643 kubeadm.go:310] 
	I0904 22:01:05.621159  725643 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 22:01:05.621170  725643 kubeadm.go:310] 
	I0904 22:01:05.621212  725643 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 22:01:05.621303  725643 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 22:01:05.621378  725643 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 22:01:05.621386  725643 kubeadm.go:310] 
	I0904 22:01:05.621459  725643 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 22:01:05.621471  725643 kubeadm.go:310] 
	I0904 22:01:05.621533  725643 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 22:01:05.621546  725643 kubeadm.go:310] 
	I0904 22:01:05.621628  725643 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 22:01:05.621734  725643 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 22:01:05.621829  725643 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 22:01:05.621838  725643 kubeadm.go:310] 
	I0904 22:01:05.621964  725643 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 22:01:05.622063  725643 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 22:01:05.622072  725643 kubeadm.go:310] 
	I0904 22:01:05.622184  725643 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ceznz5.mk1uab4zkkryxz7h \
	I0904 22:01:05.622338  725643 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 \
	I0904 22:01:05.622374  725643 kubeadm.go:310] 	--control-plane 
	I0904 22:01:05.622384  725643 kubeadm.go:310] 
	I0904 22:01:05.622481  725643 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 22:01:05.622489  725643 kubeadm.go:310] 
	I0904 22:01:05.622587  725643 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ceznz5.mk1uab4zkkryxz7h \
	I0904 22:01:05.622720  725643 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 
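For reference, the join commands above depend on the --discovery-token-ca-cert-hash that kubeadm printed. If that hash ever needs to be recomputed (for example, to hand out a fresh join command later), the standard kubeadm procedure is a short openssl pipeline; the certificate path below is taken from the "[certs] Using certificateDir" line earlier in this log and the command would be run inside the minikube node. This is an illustrative sketch, not output captured in this run.

# Recompute the CA public key hash used by --discovery-token-ca-cert-hash:
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'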
	I0904 22:01:05.622734  725643 cni.go:84] Creating CNI manager for "bridge"
	I0904 22:01:05.624222  725643 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 22:01:04.819522  718003 pod_ready.go:94] pod "kube-controller-manager-flannel-364928" is "Ready"
	I0904 22:01:04.819552  718003 pod_ready.go:86] duration metric: took 384.35874ms for pod "kube-controller-manager-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:05.019215  718003 pod_ready.go:83] waiting for pod "kube-proxy-6gcgv" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:05.419377  718003 pod_ready.go:94] pod "kube-proxy-6gcgv" is "Ready"
	I0904 22:01:05.419405  718003 pod_ready.go:86] duration metric: took 400.163306ms for pod "kube-proxy-6gcgv" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:05.619621  718003 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:06.018394  718003 pod_ready.go:94] pod "kube-scheduler-flannel-364928" is "Ready"
	I0904 22:01:06.018422  718003 pod_ready.go:86] duration metric: took 398.77956ms for pod "kube-scheduler-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:06.018432  718003 pod_ready.go:40] duration metric: took 1.603804111s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 22:01:06.063014  718003 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 22:01:06.064519  718003 out.go:179] * Done! kubectl is now configured to use "flannel-364928" cluster and "default" namespace by default
	I0904 22:01:05.625365  725643 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 22:01:05.634875  725643 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
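The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube generates for this profile. Its exact contents are not captured in the report; the commented JSON below is only a generic bridge-CNI sketch (plugin list and pod subnet are assumptions, not values read from this run), shown to make the "Configuring bridge CNI" step concrete.

# Inspect the conflist minikube just wrote (run on the node).
sudo cat /etc/cni/net.d/1-k8s.conflist
# Typical shape of a bridge conflist (illustrative only):
# {
#   "cniVersion": "0.3.1",
#   "name": "bridge",
#   "plugins": [
#     { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
#       "ipMasq": true, "hairpinMode": true,
#       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
#     { "type": "portmap", "capabilities": { "portMappings": true } }
#   ]
# }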
	I0904 22:01:05.652006  725643 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 22:01:05.652102  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:05.652107  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-364928 minikube.k8s.io/updated_at=2025_09_04T22_01_05_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a minikube.k8s.io/name=bridge-364928 minikube.k8s.io/primary=true
	I0904 22:01:05.661462  725643 ops.go:34] apiserver oom_adj: -16
	I0904 22:01:05.773298  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:06.273667  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:06.773501  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W0904 22:01:05.937387  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:07.937974  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:01:07.273459  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:07.773661  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:08.273960  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:08.773999  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:09.273963  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:09.773954  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:10.274093  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:10.340605  725643 kubeadm.go:1105] duration metric: took 4.688564247s to wait for elevateKubeSystemPrivileges
	I0904 22:01:10.340646  725643 kubeadm.go:394] duration metric: took 17.036968675s to StartCluster
	I0904 22:01:10.340667  725643 settings.go:142] acquiring lock: {Name:mke06342cfb6705345a5c7324f763dc44aea4569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:01:10.340738  725643 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 22:01:10.343387  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/kubeconfig: {Name:mk6b311573f3fade9cba8f894d5c9f5ca76d1e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:01:10.343785  725643 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 22:01:10.344421  725643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 22:01:10.344491  725643 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 22:01:10.344901  725643 addons.go:69] Setting default-storageclass=true in profile "bridge-364928"
	I0904 22:01:10.344912  725643 addons.go:69] Setting storage-provisioner=true in profile "bridge-364928"
	I0904 22:01:10.344940  725643 addons.go:238] Setting addon storage-provisioner=true in "bridge-364928"
	I0904 22:01:10.344941  725643 config.go:182] Loaded profile config "bridge-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:01:10.344937  725643 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-364928"
	I0904 22:01:10.345290  725643 host.go:66] Checking if "bridge-364928" exists ...
	I0904 22:01:10.345602  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:01:10.345827  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:01:10.346392  725643 out.go:179] * Verifying Kubernetes components...
	I0904 22:01:10.347519  725643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 22:01:10.369352  725643 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 22:01:10.370474  725643 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 22:01:10.370498  725643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 22:01:10.370548  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:01:10.375315  725643 addons.go:238] Setting addon default-storageclass=true in "bridge-364928"
	I0904 22:01:10.375361  725643 host.go:66] Checking if "bridge-364928" exists ...
	I0904 22:01:10.375809  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:01:10.400197  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:01:10.410137  725643 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 22:01:10.410164  725643 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 22:01:10.410219  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:01:10.427828  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:01:10.472181  725643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
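The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.94.1 for this cluster) and adds the log directive ahead of errors. A quick way to confirm the rewrite landed is to dump the Corefile; the context and ConfigMap names are taken from this log, and the stanza in the comments mirrors the text the sed command injects.

# Check that the host record was injected into CoreDNS:
kubectl --context bridge-364928 -n kube-system get configmap coredns \
  -o jsonpath='{.data.Corefile}'
# The Corefile should now contain, before the forward block:
#         hosts {
#            192.168.94.1 host.minikube.internal
#            fallthrough
#         }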
	I0904 22:01:10.555482  725643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 22:01:10.569140  725643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 22:01:10.645600  725643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 22:01:11.167566  725643 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0904 22:01:11.170739  725643 node_ready.go:35] waiting up to 15m0s for node "bridge-364928" to be "Ready" ...
	I0904 22:01:11.185103  725643 node_ready.go:49] node "bridge-364928" is "Ready"
	I0904 22:01:11.185130  725643 node_ready.go:38] duration metric: took 14.362304ms for node "bridge-364928" to be "Ready" ...
	I0904 22:01:11.185142  725643 api_server.go:52] waiting for apiserver process to appear ...
	I0904 22:01:11.185181  725643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 22:01:11.653679  725643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008032734s)
	I0904 22:01:11.654120  725643 api_server.go:72] duration metric: took 1.310300353s to wait for apiserver process to appear ...
	I0904 22:01:11.654146  725643 api_server.go:88] waiting for apiserver healthz status ...
	I0904 22:01:11.654182  725643 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 22:01:11.656116  725643 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0904 22:01:11.657567  725643 addons.go:514] duration metric: took 1.313087435s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0904 22:01:11.664106  725643 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0904 22:01:11.665281  725643 api_server.go:141] control plane version: v1.34.0
	I0904 22:01:11.665350  725643 api_server.go:131] duration metric: took 11.19419ms to wait for apiserver health ...
	I0904 22:01:11.665375  725643 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 22:01:11.670504  725643 system_pods.go:59] 8 kube-system pods found
	I0904 22:01:11.670580  725643 system_pods.go:61] "coredns-66bc5c9577-27hq7" [24c57b5d-e1c8-4b78-af23-ff35e32521fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.670602  725643 system_pods.go:61] "coredns-66bc5c9577-5vtqt" [33265747-58b4-43d7-ac2d-5e64a076a7ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.670635  725643 system_pods.go:61] "etcd-bridge-364928" [08f011c1-a3f0-461f-b653-b935e1d4109a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 22:01:11.670662  725643 system_pods.go:61] "kube-apiserver-bridge-364928" [ac802c39-e179-4f40-a5be-0336a2f5ef17] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 22:01:11.670680  725643 system_pods.go:61] "kube-controller-manager-bridge-364928" [9270e531-bc78-4742-bcc1-95a87b90fbc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 22:01:11.670701  725643 system_pods.go:61] "kube-proxy-77sc2" [e727f448-5f92-4de0-bedf-0d85d90fe1be] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 22:01:11.670716  725643 system_pods.go:61] "kube-scheduler-bridge-364928" [e49ad4ee-4dcd-456c-8a05-40105005827b] Running
	I0904 22:01:11.670742  725643 system_pods.go:61] "storage-provisioner" [18d36717-28bc-478f-9969-3e84f691bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:01:11.670765  725643 system_pods.go:74] duration metric: took 5.374761ms to wait for pod list to return data ...
	I0904 22:01:11.670784  725643 default_sa.go:34] waiting for default service account to be created ...
	I0904 22:01:11.671906  725643 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-364928" context rescaled to 1 replicas
	I0904 22:01:11.673734  725643 default_sa.go:45] found service account: "default"
	I0904 22:01:11.673751  725643 default_sa.go:55] duration metric: took 2.952919ms for default service account to be created ...
	I0904 22:01:11.673758  725643 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 22:01:11.677018  725643 system_pods.go:86] 8 kube-system pods found
	I0904 22:01:11.677048  725643 system_pods.go:89] "coredns-66bc5c9577-27hq7" [24c57b5d-e1c8-4b78-af23-ff35e32521fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.677074  725643 system_pods.go:89] "coredns-66bc5c9577-5vtqt" [33265747-58b4-43d7-ac2d-5e64a076a7ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.677088  725643 system_pods.go:89] "etcd-bridge-364928" [08f011c1-a3f0-461f-b653-b935e1d4109a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 22:01:11.677098  725643 system_pods.go:89] "kube-apiserver-bridge-364928" [ac802c39-e179-4f40-a5be-0336a2f5ef17] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 22:01:11.677112  725643 system_pods.go:89] "kube-controller-manager-bridge-364928" [9270e531-bc78-4742-bcc1-95a87b90fbc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 22:01:11.677123  725643 system_pods.go:89] "kube-proxy-77sc2" [e727f448-5f92-4de0-bedf-0d85d90fe1be] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 22:01:11.677129  725643 system_pods.go:89] "kube-scheduler-bridge-364928" [e49ad4ee-4dcd-456c-8a05-40105005827b] Running
	I0904 22:01:11.677156  725643 system_pods.go:89] "storage-provisioner" [18d36717-28bc-478f-9969-3e84f691bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:01:11.677195  725643 retry.go:31] will retry after 187.908151ms: missing components: kube-dns, kube-proxy
	I0904 22:01:11.869501  725643 system_pods.go:86] 8 kube-system pods found
	I0904 22:01:11.869539  725643 system_pods.go:89] "coredns-66bc5c9577-27hq7" [24c57b5d-e1c8-4b78-af23-ff35e32521fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.869546  725643 system_pods.go:89] "coredns-66bc5c9577-5vtqt" [33265747-58b4-43d7-ac2d-5e64a076a7ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.869554  725643 system_pods.go:89] "etcd-bridge-364928" [08f011c1-a3f0-461f-b653-b935e1d4109a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 22:01:11.869559  725643 system_pods.go:89] "kube-apiserver-bridge-364928" [ac802c39-e179-4f40-a5be-0336a2f5ef17] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 22:01:11.869566  725643 system_pods.go:89] "kube-controller-manager-bridge-364928" [9270e531-bc78-4742-bcc1-95a87b90fbc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 22:01:11.869571  725643 system_pods.go:89] "kube-proxy-77sc2" [e727f448-5f92-4de0-bedf-0d85d90fe1be] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 22:01:11.869575  725643 system_pods.go:89] "kube-scheduler-bridge-364928" [e49ad4ee-4dcd-456c-8a05-40105005827b] Running
	I0904 22:01:11.869580  725643 system_pods.go:89] "storage-provisioner" [18d36717-28bc-478f-9969-3e84f691bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:01:11.869595  725643 retry.go:31] will retry after 304.60066ms: missing components: kube-dns, kube-proxy
	W0904 22:01:10.438106  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:12.937710  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:01:12.178971  725643 system_pods.go:86] 8 kube-system pods found
	I0904 22:01:12.179012  725643 system_pods.go:89] "coredns-66bc5c9577-27hq7" [24c57b5d-e1c8-4b78-af23-ff35e32521fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:12.179039  725643 system_pods.go:89] "coredns-66bc5c9577-5vtqt" [33265747-58b4-43d7-ac2d-5e64a076a7ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:12.179051  725643 system_pods.go:89] "etcd-bridge-364928" [08f011c1-a3f0-461f-b653-b935e1d4109a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 22:01:12.179065  725643 system_pods.go:89] "kube-apiserver-bridge-364928" [ac802c39-e179-4f40-a5be-0336a2f5ef17] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 22:01:12.179072  725643 system_pods.go:89] "kube-controller-manager-bridge-364928" [9270e531-bc78-4742-bcc1-95a87b90fbc8] Running
	I0904 22:01:12.179079  725643 system_pods.go:89] "kube-proxy-77sc2" [e727f448-5f92-4de0-bedf-0d85d90fe1be] Running
	I0904 22:01:12.179084  725643 system_pods.go:89] "kube-scheduler-bridge-364928" [e49ad4ee-4dcd-456c-8a05-40105005827b] Running
	I0904 22:01:12.179091  725643 system_pods.go:89] "storage-provisioner" [18d36717-28bc-478f-9969-3e84f691bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:01:12.179103  725643 system_pods.go:126] duration metric: took 505.338198ms to wait for k8s-apps to be running ...
	I0904 22:01:12.179118  725643 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 22:01:12.179172  725643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 22:01:12.192434  725643 system_svc.go:56] duration metric: took 13.3042ms WaitForService to wait for kubelet
	I0904 22:01:12.192478  725643 kubeadm.go:578] duration metric: took 1.84865365s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 22:01:12.192504  725643 node_conditions.go:102] verifying NodePressure condition ...
	I0904 22:01:12.196327  725643 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 22:01:12.196360  725643 node_conditions.go:123] node cpu capacity is 8
	I0904 22:01:12.196375  725643 node_conditions.go:105] duration metric: took 3.864833ms to run NodePressure ...
	I0904 22:01:12.196390  725643 start.go:241] waiting for startup goroutines ...
	I0904 22:01:12.196405  725643 start.go:246] waiting for cluster config update ...
	I0904 22:01:12.196422  725643 start.go:255] writing updated cluster config ...
	I0904 22:01:12.196780  725643 ssh_runner.go:195] Run: rm -f paused
	I0904 22:01:12.200208  725643 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 22:01:12.203813  725643 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-27hq7" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 22:01:14.207947  725643 pod_ready.go:104] pod "coredns-66bc5c9577-27hq7" is not "Ready", error: <nil>
	W0904 22:01:16.208606  725643 pod_ready.go:104] pod "coredns-66bc5c9577-27hq7" is not "Ready", error: <nil>
	W0904 22:01:14.937965  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:17.437507  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:18.208703  725643 pod_ready.go:104] pod "coredns-66bc5c9577-27hq7" is not "Ready", error: <nil>
	W0904 22:01:20.209541  725643 pod_ready.go:104] pod "coredns-66bc5c9577-27hq7" is not "Ready", error: <nil>
	W0904 22:01:19.438103  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:21.937987  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:01:22.206461  725643 pod_ready.go:99] pod "coredns-66bc5c9577-27hq7" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-27hq7" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-27hq7" not found
	I0904 22:01:22.206487  725643 pod_ready.go:86] duration metric: took 10.002650012s for pod "coredns-66bc5c9577-27hq7" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:22.206507  725643 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5vtqt" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 22:01:24.211280  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:26.211688  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:23.938056  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:26.437578  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:28.438261  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:28.212139  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:30.712196  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:30.937722  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:33.437293  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:33.211435  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:35.212002  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:35.437642  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:37.937744  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:37.712311  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:40.211486  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:40.437678  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:42.437923  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:42.212241  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	I0904 22:01:43.212168  725643 pod_ready.go:94] pod "coredns-66bc5c9577-5vtqt" is "Ready"
	I0904 22:01:43.212195  725643 pod_ready.go:86] duration metric: took 21.005679647s for pod "coredns-66bc5c9577-5vtqt" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.214551  725643 pod_ready.go:83] waiting for pod "etcd-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.218080  725643 pod_ready.go:94] pod "etcd-bridge-364928" is "Ready"
	I0904 22:01:43.218103  725643 pod_ready.go:86] duration metric: took 3.531186ms for pod "etcd-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.219887  725643 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.223418  725643 pod_ready.go:94] pod "kube-apiserver-bridge-364928" is "Ready"
	I0904 22:01:43.223435  725643 pod_ready.go:86] duration metric: took 3.53062ms for pod "kube-apiserver-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.225015  725643 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.410438  725643 pod_ready.go:94] pod "kube-controller-manager-bridge-364928" is "Ready"
	I0904 22:01:43.410467  725643 pod_ready.go:86] duration metric: took 185.434485ms for pod "kube-controller-manager-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.610582  725643 pod_ready.go:83] waiting for pod "kube-proxy-77sc2" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.011194  725643 pod_ready.go:94] pod "kube-proxy-77sc2" is "Ready"
	I0904 22:01:44.011223  725643 pod_ready.go:86] duration metric: took 400.613464ms for pod "kube-proxy-77sc2" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.211137  725643 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.611081  725643 pod_ready.go:94] pod "kube-scheduler-bridge-364928" is "Ready"
	I0904 22:01:44.611106  725643 pod_ready.go:86] duration metric: took 399.940376ms for pod "kube-scheduler-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.611116  725643 pod_ready.go:40] duration metric: took 32.410883034s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 22:01:44.654298  725643 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 22:01:44.655877  725643 out.go:179] * Done! kubectl is now configured to use "bridge-364928" cluster and "default" namespace by default
	W0904 22:01:44.437987  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:46.937265  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:49.436866  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:51.437349  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:53.438359  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:55.937214  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:58.437298  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:00.938071  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:03.437903  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:05.937716  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:08.437864  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:10.438463  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:12.938322  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:15.437650  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:17.438523  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:19.937736  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:21.937952  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:24.437850  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:26.937448  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:29.437316  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:31.438207  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:33.938001  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:36.437346  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:38.437594  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:40.937374  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:42.937487  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:44.937549  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:47.437717  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:49.937580  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:51.937802  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:54.437967  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:56.937548  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:59.437491  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:01.437951  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:03.937792  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:06.437308  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:08.438195  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:10.936974  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:12.937521  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:14.938030  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:17.438133  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:19.937883  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:22.437545  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:24.937562  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:26.937796  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:29.438021  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:31.937501  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:33.937898  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:36.438176  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:38.937161  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:40.938107  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:43.437266  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:45.437429  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:47.937225  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:50.437223  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:52.937254  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:55.437375  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:57.438418  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:59.937385  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:02.437352  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:04.437658  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:06.937189  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:09.437373  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:11.437468  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:13.937204  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:16.437206  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:18.437886  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:20.438103  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:22.937047  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:24.937427  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:27.437105  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:29.437276  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:31.437820  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:33.937680  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:35.937736  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:38.437903  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:40.937618  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:42.937757  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:45.437774  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:47.937195  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:50.437564  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:52.437750  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:54.936920  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:56.937154  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:58.937443  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:01.437319  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:03.437808  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:05.937874  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:08.438039  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:10.937114  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:12.937617  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:15.437197  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:17.937966  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:20.436943  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:22.437112  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:24.437935  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:26.937596  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:28.938029  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:31.437927  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:33.937288  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:35.937367  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:37.937895  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:40.438111  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:42.937140  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:45.437042  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:47.437615  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:49.937478  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:51.937528  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:54.437233  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:56.437925  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:58.936789  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:00.937673  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:02.938006  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:05.437466  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:07.936896  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:09.936963  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:11.937136  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:13.937556  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:15.937885  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:17.938008  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:20.437052  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:22.937018  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:24.937171  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:26.937444  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:28.937556  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:31.437141  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:33.437332  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:35.438037  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:37.937281  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:39.937969  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:42.436919  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:44.437329  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:46.437924  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:48.937399  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:50.938023  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:52.938100  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:55.437938  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:57.937246  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:59.937506  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:02.437419  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:04.937324  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:07.436974  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:09.437077  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:11.936955  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:13.937505  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:16.436626  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:18.437679  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:20.438007  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:22.938133  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:25.437713  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:27.437915  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:29.937424  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:32.437450  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:34.937171  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:36.937391  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Sep 04 22:06:11 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:11.883226282Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=144058a0-ab8e-4dbe-b37e-aa75d40e8872 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:19 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:19.882399337Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d9aa0fe7-276c-44d5-a819-da07b550c056 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:19 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:19.882650513Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=d9aa0fe7-276c-44d5-a819-da07b550c056 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:25 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:25.882682096Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0a3a62c0-60fc-4ad0-9e65-52df26c9a3fa name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:25 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:25.882957300Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0a3a62c0-60fc-4ad0-9e65-52df26c9a3fa name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:34 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:34.882584167Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=868bfdc2-ab52-4207-9901-076fe2d8d2c0 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:34 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:34.882832565Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=868bfdc2-ab52-4207-9901-076fe2d8d2c0 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:37 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:37.882439757Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4d6ebd9e-5739-4b4d-9ea4-3102a5e836b2 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:37 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:37.882852522Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4d6ebd9e-5739-4b4d-9ea4-3102a5e836b2 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:45 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:45.882823752Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=dc78b889-c18b-4285-8290-b1b4398f99fe name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:45 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:45.883138366Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=dc78b889-c18b-4285-8290-b1b4398f99fe name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:48 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:48.882542799Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2ae1596e-2d7c-45a2-9c52-0da41436145b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:06:48 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:06:48.882793097Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2ae1596e-2d7c-45a2-9c52-0da41436145b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:00 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:00.882959655Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9ae5b88b-5c08-40fb-ae37-124d26a8ec97 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:00 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:00.883285763Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9ae5b88b-5c08-40fb-ae37-124d26a8ec97 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:03 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:03.882267353Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8c695f59-2528-440a-ad2e-edb8f4713cea name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:03 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:03.882501571Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8c695f59-2528-440a-ad2e-edb8f4713cea name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:13 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:13.882818173Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1618eb33-c28d-4b34-b9b9-9cd36802eaa6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:13 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:13.883092063Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1618eb33-c28d-4b34-b9b9-9cd36802eaa6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:17 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:17.882249267Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=09a944a2-b673-4bdd-a0e6-bd715da7ebe4 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:17 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:17.882496755Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=09a944a2-b673-4bdd-a0e6-bd715da7ebe4 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:26 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:26.882585263Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=695015c0-b5d4-40df-a5bc-b36623ae1897 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:26 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:26.882851622Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=695015c0-b5d4-40df-a5bc-b36623ae1897 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:30 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:30.882743800Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=d7ed7105-9c87-4b03-b132-e42e10265ecc name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:07:30 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:07:30.882970078Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d7ed7105-9c87-4b03-b132-e42e10265ecc name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	f7e4f5e245eb6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   5ea23c6fc0ebc       dashboard-metrics-scraper-6ffb444bf9-fz65t
	19865017dd694       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   bed1036522651       storage-provisioner
	fb9e4193d96f3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   5239fe21cafc6       coredns-66bc5c9577-6l9v7
	11f3a95d01801       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   451e72cb34b06       busybox
	e18526edf6ba3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   ca975460210ec       kindnet-2c8sv
	9928e6b6e53c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   bed1036522651       storage-provisioner
	b795726e2372e       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   9 minutes ago       Running             kube-proxy                  1                   b48722f5128a3       kube-proxy-zgdrw
	c085eb94106de       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   9 minutes ago       Running             kube-scheduler              1                   4d9e98de5611c       kube-scheduler-default-k8s-diff-port-601847
	16296337219c8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   2ce4c093e7fd7       etcd-default-k8s-diff-port-601847
	0ff5410f92b61       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   9 minutes ago       Running             kube-controller-manager     1                   63ef34691dbfc       kube-controller-manager-default-k8s-diff-port-601847
	009da3d5b4890       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   9 minutes ago       Running             kube-apiserver              1                   4301395da7fc0       kube-apiserver-default-k8s-diff-port-601847
	
	
	==> coredns [fb9e4193d96f31c22cb27f97cf797a4a64b14bbcbb1648abc8512b4b3e07fc81] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57779 - 32163 "HINFO IN 648113122663838148.1941072893311661962. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.458609592s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-601847
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-601847
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a
	                    minikube.k8s.io/name=default-k8s-diff-port-601847
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T21_56_36_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 21:56:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-601847
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 22:07:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 22:06:07 +0000   Thu, 04 Sep 2025 21:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 22:06:07 +0000   Thu, 04 Sep 2025 21:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 22:06:07 +0000   Thu, 04 Sep 2025 21:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 22:06:07 +0000   Thu, 04 Sep 2025 21:57:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-601847
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5df31fce7394b4db986c14ce48081e1
	  System UUID:                202b9b21-4e85-489b-b9fa-c1acfe66ebb3
	  Boot ID:                    d34ed5fc-a148-45de-9a0e-f744d5f792e8
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-6l9v7                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-default-k8s-diff-port-601847                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-2c8sv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-default-k8s-diff-port-601847             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-601847    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-zgdrw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-default-k8s-diff-port-601847             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-746fcd58dc-k7j78                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fz65t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-22q8g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m41s                  kube-proxy       
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           11m                    node-controller  Node default-k8s-diff-port-601847 event: Registered Node default-k8s-diff-port-601847 in Controller
	  Normal   NodeReady                10m                    kubelet          Node default-k8s-diff-port-601847 status is now: NodeReady
	  Normal   Starting                 9m47s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m47s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m47s (x8 over 9m47s)  kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m47s (x8 over 9m47s)  kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m47s (x8 over 9m47s)  kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m37s                  node-controller  Node default-k8s-diff-port-601847 event: Registered Node default-k8s-diff-port-601847 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e b5 06 e3 98 d4 08 06
	[ +11.067174] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 05 50 71 c8 97 08 06
	[  +0.000348] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4e 9f 60 b8 d0 a4 08 06
	[Sep 4 22:00] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae c6 57 b4 5a ac 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 7a 52 2a 9d 32 91 08 06
	[Sep 4 22:01] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca c1 7c bd 85 07 08 06
	[  +7.691011] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 20 b4 1f 35 71 08 06
	[  +0.517474] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 20 b4 1f 35 71 08 06
	[  +0.000824] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 94 02 98 e7 7a 08 06
	[  +9.031118] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e af ce f6 73 03 08 06
	[  +0.000308] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca c1 7c bd 85 07 08 06
	[ +32.638428] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8e e9 e7 47 0d 5c 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 20 b4 1f 35 71 08 06
	
	
	==> etcd [16296337219c89f4129b435f9353f666fdd58ec04339099ecc4bb3f392a9c763] <==
	{"level":"warn","ts":"2025-09-04T21:57:56.352536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.359780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.366563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.372811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.378895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.392866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.450174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.457494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.464972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.471040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.477392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.485270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.491953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.550911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.558161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.565259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.571620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.600051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.645059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.651882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.754245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45464","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T21:58:37.656882Z","caller":"traceutil/trace.go:172","msg":"trace[988814331] transaction","detail":"{read_only:false; response_revision:689; number_of_response:1; }","duration":"170.493866ms","start":"2025-09-04T21:58:37.486243Z","end":"2025-09-04T21:58:37.656737Z","steps":["trace[988814331] 'process raft request'  (duration: 99.016179ms)","trace[988814331] 'compare'  (duration: 71.313612ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T21:58:52.066545Z","caller":"traceutil/trace.go:172","msg":"trace[1350173158] transaction","detail":"{read_only:false; response_revision:719; number_of_response:1; }","duration":"177.911555ms","start":"2025-09-04T21:58:51.888611Z","end":"2025-09-04T21:58:52.066523Z","steps":["trace[1350173158] 'process raft request'  (duration: 177.806952ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T22:00:45.291878Z","caller":"traceutil/trace.go:172","msg":"trace[1368556805] transaction","detail":"{read_only:false; response_revision:855; number_of_response:1; }","duration":"104.708392ms","start":"2025-09-04T22:00:45.187147Z","end":"2025-09-04T22:00:45.291856Z","steps":["trace[1368556805] 'process raft request'  (duration: 104.538694ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T22:00:46.225101Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.084458ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638355079021129573 > lease_revoke:<id:59069916bc7aef07>","response":"size:28"}
	
	
	==> kernel <==
	 22:07:41 up  3:50,  0 users,  load average: 0.21, 1.15, 1.81
	Linux default-k8s-diff-port-601847 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e18526edf6ba391aabf631cee54d35ca7c972438d099f56f4a4c1145e634e4f8] <==
	I0904 22:05:39.555456       1 main.go:301] handling current node
	I0904 22:05:49.556864       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:05:49.556897       1 main.go:301] handling current node
	I0904 22:05:59.560851       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:05:59.560882       1 main.go:301] handling current node
	I0904 22:06:09.556258       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:06:09.556290       1 main.go:301] handling current node
	I0904 22:06:19.560828       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:06:19.560868       1 main.go:301] handling current node
	I0904 22:06:29.558368       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:06:29.558404       1 main.go:301] handling current node
	I0904 22:06:39.555666       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:06:39.555716       1 main.go:301] handling current node
	I0904 22:06:49.562640       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:06:49.562678       1 main.go:301] handling current node
	I0904 22:06:59.561223       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:06:59.561250       1 main.go:301] handling current node
	I0904 22:07:09.555677       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:07:09.555717       1 main.go:301] handling current node
	I0904 22:07:19.563756       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:07:19.563794       1 main.go:301] handling current node
	I0904 22:07:29.557094       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:07:29.557135       1 main.go:301] handling current node
	I0904 22:07:39.555672       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:07:39.555712       1 main.go:301] handling current node
	
	
	==> kube-apiserver [009da3d5b4890abf829a2b06e9ed211e4f39a80fde2b69482cbfddafbed269a3] <==
	I0904 22:03:04.480647       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 22:03:58.672746       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 22:03:58.672817       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 22:03:58.672831       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 22:03:58.672934       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 22:03:58.672997       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 22:03:58.674835       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 22:04:01.753091       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 22:04:28.293064       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 22:05:30.485437       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 22:05:51.469566       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 22:05:58.673508       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 22:05:58.673563       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 22:05:58.673579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 22:05:58.675748       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 22:05:58.675831       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 22:05:58.675843       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 22:06:49.640276       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 22:07:16.935743       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [0ff5410f92b61c2b92597a4296a621756c4875edd4d63dd8954d55b3c17e657b] <==
	I0904 22:01:33.342282       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:02:03.314220       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:02:03.350061       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:02:33.317991       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:02:33.357061       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:03:03.322281       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:03:03.363503       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:03:33.326919       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:03:33.370955       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:04:03.332161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:04:03.377295       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:04:33.336406       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:04:33.383333       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:05:03.340444       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:05:03.390003       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:05:33.344220       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:05:33.396117       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:06:03.349107       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:06:03.402442       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:06:33.353293       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:06:33.409381       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:07:03.357872       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:07:03.415427       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:07:33.361945       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:07:33.421804       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [b795726e2372e2c2155e88b5fbd736497e9121c883d31f583086c1e1d48edd92] <==
	I0904 21:57:59.352190       1 server_linux.go:53] "Using iptables proxy"
	I0904 21:57:59.578698       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 21:57:59.681074       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 21:57:59.681111       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0904 21:57:59.681223       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 21:57:59.761516       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 21:57:59.761574       1 server_linux.go:132] "Using iptables Proxier"
	I0904 21:57:59.766870       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 21:57:59.767298       1 server.go:527] "Version info" version="v1.34.0"
	I0904 21:57:59.767339       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:57:59.768586       1 config.go:200] "Starting service config controller"
	I0904 21:57:59.768612       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 21:57:59.768644       1 config.go:106] "Starting endpoint slice config controller"
	I0904 21:57:59.768662       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 21:57:59.768685       1 config.go:309] "Starting node config controller"
	I0904 21:57:59.768691       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 21:57:59.768698       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 21:57:59.769108       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 21:57:59.769196       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 21:57:59.869220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 21:57:59.869242       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 21:57:59.869274       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c085eb94106dee9d8191474057cf3151b866fdfdfff4f11c0ecd257b24b464a1] <==
	I0904 21:57:55.764093       1 serving.go:386] Generated self-signed cert in-memory
	W0904 21:57:57.660706       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 21:57:57.660827       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 21:57:57.660867       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 21:57:57.660902       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 21:57:57.852123       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 21:57:57.852158       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:57:57.856441       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:57:57.856591       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 21:57:57.856724       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 21:57:57.856612       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:57:57.958069       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 22:06:54 default-k8s-diff-port-601847 kubelet[823]: E0904 22:06:54.027737     823 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757023614027520029  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:06:54 default-k8s-diff-port-601847 kubelet[823]: E0904 22:06:54.027780     823 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757023614027520029  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:07:00 default-k8s-diff-port-601847 kubelet[823]: I0904 22:07:00.882397     823 scope.go:117] "RemoveContainer" containerID="f7e4f5e245eb6419d0cbf926617bb20b6f92d05d3e1e92b2d7dde278ab2e7f1d"
	Sep 04 22:07:00 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:00.882563     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz65t_kubernetes-dashboard(32237f00-2ebe-43ce-89f8-a1e2e4c7a598)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz65t" podUID="32237f00-2ebe-43ce-89f8-a1e2e4c7a598"
	Sep 04 22:07:00 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:00.883595     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-22q8g" podUID="6e7da225-bc40-402a-aacd-963133c9e211"
	Sep 04 22:07:03 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:03.882813     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k7j78" podUID="4487a876-7f24-447b-afec-505bc3d62dbb"
	Sep 04 22:07:04 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:04.029583     823 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757023624029332441  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:07:04 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:04.029621     823 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757023624029332441  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:07:13 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:13.883391     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-22q8g" podUID="6e7da225-bc40-402a-aacd-963133c9e211"
	Sep 04 22:07:14 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:14.030783     823 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757023634030561216  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:07:14 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:14.030821     823 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757023634030561216  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:07:14 default-k8s-diff-port-601847 kubelet[823]: I0904 22:07:14.881931     823 scope.go:117] "RemoveContainer" containerID="f7e4f5e245eb6419d0cbf926617bb20b6f92d05d3e1e92b2d7dde278ab2e7f1d"
	Sep 04 22:07:14 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:14.882123     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz65t_kubernetes-dashboard(32237f00-2ebe-43ce-89f8-a1e2e4c7a598)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz65t" podUID="32237f00-2ebe-43ce-89f8-a1e2e4c7a598"
	Sep 04 22:07:17 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:17.882863     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k7j78" podUID="4487a876-7f24-447b-afec-505bc3d62dbb"
	Sep 04 22:07:24 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:24.031813     823 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757023644031600041  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:07:24 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:24.031847     823 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757023644031600041  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:07:25 default-k8s-diff-port-601847 kubelet[823]: I0904 22:07:25.882307     823 scope.go:117] "RemoveContainer" containerID="f7e4f5e245eb6419d0cbf926617bb20b6f92d05d3e1e92b2d7dde278ab2e7f1d"
	Sep 04 22:07:25 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:25.882524     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz65t_kubernetes-dashboard(32237f00-2ebe-43ce-89f8-a1e2e4c7a598)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz65t" podUID="32237f00-2ebe-43ce-89f8-a1e2e4c7a598"
	Sep 04 22:07:26 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:26.883246     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-22q8g" podUID="6e7da225-bc40-402a-aacd-963133c9e211"
	Sep 04 22:07:30 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:30.883288     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k7j78" podUID="4487a876-7f24-447b-afec-505bc3d62dbb"
	Sep 04 22:07:34 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:34.033891     823 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757023654033635178  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:07:34 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:34.033926     823 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757023654033635178  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:07:39 default-k8s-diff-port-601847 kubelet[823]: I0904 22:07:39.882056     823 scope.go:117] "RemoveContainer" containerID="f7e4f5e245eb6419d0cbf926617bb20b6f92d05d3e1e92b2d7dde278ab2e7f1d"
	Sep 04 22:07:39 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:39.882283     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz65t_kubernetes-dashboard(32237f00-2ebe-43ce-89f8-a1e2e4c7a598)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz65t" podUID="32237f00-2ebe-43ce-89f8-a1e2e4c7a598"
	Sep 04 22:07:40 default-k8s-diff-port-601847 kubelet[823]: E0904 22:07:40.883503     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-22q8g" podUID="6e7da225-bc40-402a-aacd-963133c9e211"
	
	
	==> storage-provisioner [19865017dd69447d35978e2ded9b2720c45b59ad417ef057f3bbf96a2ddd64c1] <==
	W0904 22:07:16.773195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:18.775774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:18.779647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:20.782066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:20.785665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:22.788320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:22.792606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:24.795048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:24.798964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:26.801854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:26.805516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:28.808602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:28.812209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:30.815107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:30.819771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:32.822743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:32.826296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:34.829116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:34.833860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:36.836525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:36.840121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:38.842951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:38.847588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:40.851058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:07:40.855112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9928e6b6e53c41714e74f689852a89559605d108c2329637e53f78886041722d] <==
	I0904 21:57:59.065783       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0904 21:58:29.068218       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
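A note on the storage-provisioner warnings in the log above: they appear to come from the provisioner's Endpoints-based leader-election lock, which still lists v1 Endpoints, an API Kubernetes now flags as deprecated in favour of discovery.k8s.io/v1 EndpointSlice. The warnings are harmless for this test; if one wanted to confirm the replacement API is served by this cluster, a minimal manual check (a sketch assuming the kubeconfig context named in this report is still present) could be:

	# hypothetical manual check; context name taken from this report
	kubectl --context default-k8s-diff-port-601847 api-resources --api-group=discovery.k8s.io
	# list the EndpointSlice objects that back the deprecated Endpoints listing
	kubectl --context default-k8s-diff-port-601847 get endpointslices.discovery.k8s.io -A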
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-k7j78 kubernetes-dashboard-855c9754f9-22q8g
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 describe pod metrics-server-746fcd58dc-k7j78 kubernetes-dashboard-855c9754f9-22q8g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-601847 describe pod metrics-server-746fcd58dc-k7j78 kubernetes-dashboard-855c9754f9-22q8g: exit status 1 (54.042178ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-k7j78" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-22q8g" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-601847 describe pod metrics-server-746fcd58dc-k7j78 kubernetes-dashboard-855c9754f9-22q8g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.29s)
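The ImagePullBackOff entries in the kubelet log above name Docker Hub's anonymous pull rate limit (toomanyrequests) as the reason the kubernetesui/dashboard image never arrives, which appears to be what keeps the awaited pod Pending past the test deadline. A quick way to confirm how much of the anonymous pull budget remains on the CI host is Docker's documented rate-limit probe (a sketch assuming curl and jq are installed on the host; ratelimitpreview/test is the repository Docker provides for this check):

	# obtain an anonymous pull token, then read the ratelimit-* response headers
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i '^ratelimit'

A ratelimit-remaining value of 0 would match the behaviour seen in these runs; pulling with authenticated credentials or through a registry mirror avoids the limit.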

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-22q8g" [6e7da225-bc40-402a-aacd-963133c9e211] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0904 22:07:57.829632  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/enable-default-cni-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:07.021655  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:12.066074  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:16.584848  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/auto-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:29.453518  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/kindnet-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:44.285871  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/auto-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:49.947425  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/flannel-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:08:57.154898  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/kindnet-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:25.637429  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:28.943286  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:09:35.740912  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/custom-flannel-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:02.005728  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:03.442400  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/custom-flannel-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:13.969501  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/enable-default-cni-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:35.556209  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/no-preload-093695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:10:41.671146  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/enable-default-cni-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:11:06.086959  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/flannel-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:11:33.789643  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/flannel-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:11:45.085153  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:12:12.784632  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:13:12.065666  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:13:16.584697  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/auto-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:13:29.453427  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/kindnet-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-04 22:16:42.082104966 +0000 UTC m=+4878.925707529
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 describe po kubernetes-dashboard-855c9754f9-22q8g -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-601847 describe po kubernetes-dashboard-855c9754f9-22q8g -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-22q8g
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-601847/192.168.76.2
Start Time:       Thu, 04 Sep 2025 21:58:03 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jkms5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-jkms5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-22q8g to default-k8s-diff-port-601847
Normal   Pulling    12m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     11m (x4 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     11m (x5 over 18m)     kubelet            Error: ErrImagePull
Warning  Failed     7m57s (x24 over 18m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    3m29s (x45 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m47s (x3 over 15m)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 logs kubernetes-dashboard-855c9754f9-22q8g -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-601847 logs kubernetes-dashboard-855c9754f9-22q8g -n kubernetes-dashboard: exit status 1 (64.288229ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-22q8g" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-601847 logs kubernetes-dashboard-855c9754f9-22q8g -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-601847
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-601847:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5",
	        "Created": "2025-09-04T21:56:17.062908981Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 689541,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T21:57:47.631588991Z",
	            "FinishedAt": "2025-09-04T21:57:46.041219709Z"
	        },
	        "Image": "sha256:26724cb1013292a869503ea8c35fa5a932e6ec3e4e85e318411373ae6a24478b",
	        "ResolvConfPath": "/var/lib/docker/containers/07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5/hostname",
	        "HostsPath": "/var/lib/docker/containers/07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5/hosts",
	        "LogPath": "/var/lib/docker/containers/07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5/07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5-json.log",
	        "Name": "/default-k8s-diff-port-601847",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-601847:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-601847",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "07ce3aad696c6b9eaefbc8db9e1f9b916b577fe41fa5b2f14c0437ec2435ebf5",
	                "LowerDir": "/var/lib/docker/overlay2/573fd9140cabb79a06017c368f52bd18455477eab6e266ebe7ef78d7685daff7-init/diff:/var/lib/docker/overlay2/fcc29a201369e5fb579bfafdafe90606e5b86bd2f4b52beb7f6a97f7e955214d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/573fd9140cabb79a06017c368f52bd18455477eab6e266ebe7ef78d7685daff7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/573fd9140cabb79a06017c368f52bd18455477eab6e266ebe7ef78d7685daff7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/573fd9140cabb79a06017c368f52bd18455477eab6e266ebe7ef78d7685daff7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-601847",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-601847/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-601847",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-601847",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-601847",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f616af140bcfdb4e2e508dd3522c97ac6e046eaba3b2aa145fdf514a9ded67dc",
	            "SandboxKey": "/var/run/docker/netns/f616af140bcf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-601847": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:da:fb:f1:a2:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bf07459402385f2fa05662d4e68f7943fbbac7763a63a2d6af5fc7bff0f17d6a",
	                    "EndpointID": "97a1d39ef2e905864d02978bb17fa696e389b4392166de25fb70e1fcba7c6911",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-601847",
	                        "07ce3aad696c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
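The inspect output above shows each exposed port published only on 127.0.0.1 with an ephemeral host port (for example 8444/tcp mapped to 33488). When reproducing this post-mortem by hand, the mapped API-server port can be pulled straight from the same inspect data with a Go template (a sketch using the container name shown in this report):

	# hypothetical manual check; prints the host port bound to the cluster's 8444/tcp endpoint (33488 in this run)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}' default-k8s-diff-port-601847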
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-601847 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-601847 logs -n 25: (1.119717835s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-364928 sudo iptables -t nat -L -n -v                                 │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo systemctl status kubelet --all --full --no-pager         │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo systemctl cat kubelet --no-pager                         │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo journalctl -xeu kubelet --all --full --no-pager          │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo cat /etc/kubernetes/kubelet.conf                         │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo cat /var/lib/kubelet/config.yaml                         │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo systemctl status docker --all --full --no-pager          │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │                     │
	│ ssh     │ -p calico-364928 sudo systemctl cat docker --no-pager                          │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo cat /etc/docker/daemon.json                              │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │                     │
	│ ssh     │ -p calico-364928 sudo docker system info                                       │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │                     │
	│ ssh     │ -p calico-364928 sudo systemctl status cri-docker --all --full --no-pager      │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │                     │
	│ ssh     │ -p calico-364928 sudo systemctl cat cri-docker --no-pager                      │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │                     │
	│ ssh     │ -p calico-364928 sudo cat /usr/lib/systemd/system/cri-docker.service           │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo cri-dockerd --version                                    │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo systemctl status containerd --all --full --no-pager      │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │                     │
	│ ssh     │ -p calico-364928 sudo systemctl cat containerd --no-pager                      │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo cat /lib/systemd/system/containerd.service               │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo cat /etc/containerd/config.toml                          │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo containerd config dump                                   │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo systemctl status crio --all --full --no-pager            │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo systemctl cat crio --no-pager                            │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ ssh     │ -p calico-364928 sudo crio config                                              │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	│ delete  │ -p calico-364928                                                               │ calico-364928 │ jenkins │ v1.36.0 │ 04 Sep 25 22:14 UTC │ 04 Sep 25 22:14 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 22:00:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 22:00:42.061711  725643 out.go:360] Setting OutFile to fd 1 ...
	I0904 22:00:42.061808  725643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 22:00:42.061812  725643 out.go:374] Setting ErrFile to fd 2...
	I0904 22:00:42.061816  725643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 22:00:42.062020  725643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 22:00:42.062596  725643 out.go:368] Setting JSON to false
	I0904 22:00:42.063905  725643 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13391,"bootTime":1757009851,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 22:00:42.064015  725643 start.go:140] virtualization: kvm guest
	I0904 22:00:42.066135  725643 out.go:179] * [bridge-364928] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 22:00:42.067371  725643 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 22:00:42.067374  725643 notify.go:220] Checking for updates...
	I0904 22:00:42.069971  725643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 22:00:42.071190  725643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 22:00:42.072493  725643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 22:00:42.073696  725643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 22:00:42.074894  725643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 22:00:42.076414  725643 config.go:182] Loaded profile config "calico-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:00:42.076525  725643 config.go:182] Loaded profile config "default-k8s-diff-port-601847": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:00:42.076621  725643 config.go:182] Loaded profile config "flannel-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:00:42.076780  725643 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 22:00:42.102652  725643 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 22:00:42.102759  725643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 22:00:42.150839  725643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 22:00:42.141763804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 22:00:42.150935  725643 docker.go:318] overlay module found
	I0904 22:00:42.152627  725643 out.go:179] * Using the docker driver based on user configuration
	I0904 22:00:42.153765  725643 start.go:304] selected driver: docker
	I0904 22:00:42.153783  725643 start.go:918] validating driver "docker" against <nil>
	I0904 22:00:42.153795  725643 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 22:00:42.154647  725643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 22:00:42.205855  725643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 22:00:42.19598252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 22:00:42.206031  725643 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 22:00:42.206240  725643 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 22:00:42.208042  725643 out.go:179] * Using Docker driver with root privileges
	I0904 22:00:42.209370  725643 cni.go:84] Creating CNI manager for "bridge"
	I0904 22:00:42.209392  725643 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 22:00:42.209474  725643 start.go:348] cluster config:
	{Name:bridge-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0904 22:00:42.210894  725643 out.go:179] * Starting "bridge-364928" primary control-plane node in "bridge-364928" cluster
	I0904 22:00:42.212036  725643 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 22:00:42.213197  725643 out.go:179] * Pulling base image v0.0.47-1756116447-21413 ...
	I0904 22:00:42.214197  725643 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 22:00:42.214239  725643 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 22:00:42.214254  725643 cache.go:58] Caching tarball of preloaded images
	I0904 22:00:42.214284  725643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0904 22:00:42.214336  725643 preload.go:172] Found /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 22:00:42.214347  725643 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 22:00:42.214433  725643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/config.json ...
	I0904 22:00:42.214451  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/config.json: {Name:mk330067f00b63e01efe897148f5319c2e1cf180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:42.234667  725643 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon, skipping pull
	I0904 22:00:42.234693  725643 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 exists in daemon, skipping load
	I0904 22:00:42.234714  725643 cache.go:232] Successfully downloaded all kic artifacts
	I0904 22:00:42.234743  725643 start.go:360] acquireMachinesLock for bridge-364928: {Name:mk2c01c5b822bc2f5bd831325c7a96dfddb208a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 22:00:42.234861  725643 start.go:364] duration metric: took 88.836µs to acquireMachinesLock for "bridge-364928"
	I0904 22:00:42.234889  725643 start.go:93] Provisioning new machine with config: &{Name:bridge-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-364928 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 22:00:42.234993  725643 start.go:125] createHost starting for "" (driver="docker")
	W0904 22:00:39.937960  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:42.437821  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:39.959283  718003 node_ready.go:57] node "flannel-364928" has "Ready":"False" status (will retry)
	W0904 22:00:42.456319  718003 node_ready.go:57] node "flannel-364928" has "Ready":"False" status (will retry)
	I0904 22:00:42.237290  725643 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0904 22:00:42.237548  725643 start.go:159] libmachine.API.Create for "bridge-364928" (driver="docker")
	I0904 22:00:42.237584  725643 client.go:168] LocalClient.Create starting
	I0904 22:00:42.237648  725643 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem
	I0904 22:00:42.237694  725643 main.go:141] libmachine: Decoding PEM data...
	I0904 22:00:42.237717  725643 main.go:141] libmachine: Parsing certificate...
	I0904 22:00:42.237798  725643 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem
	I0904 22:00:42.237832  725643 main.go:141] libmachine: Decoding PEM data...
	I0904 22:00:42.237850  725643 main.go:141] libmachine: Parsing certificate...
	I0904 22:00:42.238177  725643 cli_runner.go:164] Run: docker network inspect bridge-364928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 22:00:42.255272  725643 cli_runner.go:211] docker network inspect bridge-364928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 22:00:42.255342  725643 network_create.go:284] running [docker network inspect bridge-364928] to gather additional debugging logs...
	I0904 22:00:42.255366  725643 cli_runner.go:164] Run: docker network inspect bridge-364928
	W0904 22:00:42.272070  725643 cli_runner.go:211] docker network inspect bridge-364928 returned with exit code 1
	I0904 22:00:42.272101  725643 network_create.go:287] error running [docker network inspect bridge-364928]: docker network inspect bridge-364928: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-364928 not found
	I0904 22:00:42.272115  725643 network_create.go:289] output of [docker network inspect bridge-364928]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-364928 not found
	
	** /stderr **
	I0904 22:00:42.272236  725643 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 22:00:42.289985  725643 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5502e71d097a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:ef:c1:96:ed:36} reservation:<nil>}
	I0904 22:00:42.290961  725643 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e63f0d636ac7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:63:34:a9:e4:57} reservation:<nil>}
	I0904 22:00:42.291514  725643 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-66f991fb509e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:87:15:f5:6e:d8} reservation:<nil>}
	I0904 22:00:42.292170  725643 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bf0745940238 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:9d:2a:98:20:f7} reservation:<nil>}
	I0904 22:00:42.292984  725643 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9ad9d5939106 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:07:97:7a:6d:cd} reservation:<nil>}
	I0904 22:00:42.293921  725643 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e75bc0}
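The five "skipping subnet" lines above show the free-subnet search advancing the third octet by 9 (49, 58, 67, 76, 85) until 192.168.94.0/24 is free. The following is a minimal, hypothetical Go sketch of that stepping logic, written for illustration only; the function name and the taken-set lookup are assumptions, not minikube's network package.

    package main

    import (
    	"fmt"
    	"net"
    )

    // firstFreeSubnet scans 192.168.x.0/24 candidates, stepping the third octet
    // by 9 as seen in the log, and returns the first one not already taken.
    func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
    	for octet := 49; octet <= 255; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[cidr] {
    			continue // subnet already backs another docker bridge
    		}
    		_, subnet, err := net.ParseCIDR(cidr)
    		if err != nil {
    			return nil, err
    		}
    		return subnet, nil
    	}
    	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
    }

    func main() {
    	// Subnets reported as taken by the inspection above.
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    		"192.168.76.0/24": true,
    		"192.168.85.0/24": true,
    	}
    	subnet, err := firstFreeSubnet(taken)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("using free private subnet", subnet) // prints 192.168.94.0/24
    }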
	I0904 22:00:42.293948  725643 network_create.go:124] attempt to create docker network bridge-364928 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0904 22:00:42.293998  725643 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-364928 bridge-364928
	I0904 22:00:42.348216  725643 network_create.go:108] docker network bridge-364928 192.168.94.0/24 created
	I0904 22:00:42.348249  725643 kic.go:121] calculated static IP "192.168.94.2" for the "bridge-364928" container
	I0904 22:00:42.348334  725643 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 22:00:42.365412  725643 cli_runner.go:164] Run: docker volume create bridge-364928 --label name.minikube.sigs.k8s.io=bridge-364928 --label created_by.minikube.sigs.k8s.io=true
	I0904 22:00:42.382276  725643 oci.go:103] Successfully created a docker volume bridge-364928
	I0904 22:00:42.382353  725643 cli_runner.go:164] Run: docker run --rm --name bridge-364928-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-364928 --entrypoint /usr/bin/test -v bridge-364928:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -d /var/lib
	I0904 22:00:42.821548  725643 oci.go:107] Successfully prepared a docker volume bridge-364928
	I0904 22:00:42.821620  725643 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 22:00:42.821653  725643 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 22:00:42.821718  725643 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v bridge-364928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir
	W0904 22:00:44.437988  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:46.937817  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:44.956079  718003 node_ready.go:57] node "flannel-364928" has "Ready":"False" status (will retry)
	W0904 22:00:46.956390  718003 node_ready.go:57] node "flannel-364928" has "Ready":"False" status (will retry)
	W0904 22:00:48.956513  718003 node_ready.go:57] node "flannel-364928" has "Ready":"False" status (will retry)
	I0904 22:00:47.598360  725643 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v bridge-364928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.776586901s)
	I0904 22:00:47.598400  725643 kic.go:203] duration metric: took 4.776741896s to extract preloaded images to volume ...
	W0904 22:00:47.598558  725643 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 22:00:47.598682  725643 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 22:00:47.654953  725643 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-364928 --name bridge-364928 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-364928 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-364928 --network bridge-364928 --ip 192.168.94.2 --volume bridge-364928:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9
	I0904 22:00:47.961081  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Running}}
	I0904 22:00:47.983935  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:00:48.006544  725643 cli_runner.go:164] Run: docker exec bridge-364928 stat /var/lib/dpkg/alternatives/iptables
	I0904 22:00:48.055370  725643 oci.go:144] the created container "bridge-364928" has a running status.
	I0904 22:00:48.055409  725643 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa...
	I0904 22:00:48.938754  725643 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 22:00:48.959569  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:00:48.976653  725643 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 22:00:48.976678  725643 kic_runner.go:114] Args: [docker exec --privileged bridge-364928 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 22:00:49.017120  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:00:49.033524  725643 machine.go:93] provisionDockerMachine start ...
	I0904 22:00:49.033619  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:49.051447  725643 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:49.051790  725643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I0904 22:00:49.051814  725643 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 22:00:49.164290  725643 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-364928
	
	I0904 22:00:49.164322  725643 ubuntu.go:182] provisioning hostname "bridge-364928"
	I0904 22:00:49.164378  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:49.182571  725643 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:49.182793  725643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I0904 22:00:49.182808  725643 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-364928 && echo "bridge-364928" | sudo tee /etc/hostname
	I0904 22:00:49.308328  725643 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-364928
	
	I0904 22:00:49.308406  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:49.325968  725643 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:49.326207  725643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I0904 22:00:49.326235  725643 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-364928' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-364928/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-364928' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 22:00:49.441060  725643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 22:00:49.441089  725643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21490-384635/.minikube CaCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21490-384635/.minikube}
	I0904 22:00:49.441143  725643 ubuntu.go:190] setting up certificates
	I0904 22:00:49.441164  725643 provision.go:84] configureAuth start
	I0904 22:00:49.441220  725643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-364928
	I0904 22:00:49.458619  725643 provision.go:143] copyHostCerts
	I0904 22:00:49.458681  725643 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem, removing ...
	I0904 22:00:49.458695  725643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem
	I0904 22:00:49.458767  725643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/ca.pem (1078 bytes)
	I0904 22:00:49.458865  725643 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem, removing ...
	I0904 22:00:49.458877  725643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem
	I0904 22:00:49.458915  725643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/cert.pem (1123 bytes)
	I0904 22:00:49.458985  725643 exec_runner.go:144] found /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem, removing ...
	I0904 22:00:49.458995  725643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem
	I0904 22:00:49.459028  725643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21490-384635/.minikube/key.pem (1675 bytes)
	I0904 22:00:49.459092  725643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem org=jenkins.bridge-364928 san=[127.0.0.1 192.168.94.2 bridge-364928 localhost minikube]
	I0904 22:00:49.825343  725643 provision.go:177] copyRemoteCerts
	I0904 22:00:49.825399  725643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 22:00:49.825447  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:49.843070  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:00:49.929288  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 22:00:49.952111  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 22:00:49.974969  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 22:00:49.997241  725643 provision.go:87] duration metric: took 556.058591ms to configureAuth
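The server certificate generated above is a plain x509 cert whose SANs (127.0.0.1, 192.168.94.2, bridge-364928, localhost, minikube) cover every name the machine may be dialed by. As a rough illustration only, the sketch below builds a certificate with the same SAN list using Go's crypto/x509; it self-signs for brevity, whereas the real cert is signed by the minikube CA, and none of this is minikube's own code.

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-364928"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN list taken from the "generating server cert ... san=[...]" line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
    		DNSNames:    []string{"bridge-364928", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }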
	I0904 22:00:49.997273  725643 ubuntu.go:206] setting minikube options for container-runtime
	I0904 22:00:49.997427  725643 config.go:182] Loaded profile config "bridge-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:00:49.997529  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:50.015408  725643 main.go:141] libmachine: Using SSH client type: native
	I0904 22:00:50.015622  725643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33510 <nil> <nil>}
	I0904 22:00:50.015639  725643 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 22:00:50.220444  725643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 22:00:50.220474  725643 machine.go:96] duration metric: took 1.186921603s to provisionDockerMachine
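The repeated docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls above resolve the ephemeral host port (33510 here) that dockerd bound for the container's SSH port, since the container was started with --publish=127.0.0.1::22. A minimal, hypothetical Go sketch of the same lookup via os/exec (not minikube's cli_runner) follows.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort asks dockerd which host port is mapped to the container's 22/tcp.
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("bridge-364928")
    	if err != nil {
    		panic(err)
    	}
    	// With --publish=127.0.0.1::22 the daemon picks an ephemeral port, e.g. 33510.
    	fmt.Println("ssh reachable at 127.0.0.1:" + port)
    }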
	I0904 22:00:50.220487  725643 client.go:171] duration metric: took 7.982895636s to LocalClient.Create
	I0904 22:00:50.220512  725643 start.go:167] duration metric: took 7.982966048s to libmachine.API.Create "bridge-364928"
	I0904 22:00:50.220524  725643 start.go:293] postStartSetup for "bridge-364928" (driver="docker")
	I0904 22:00:50.220536  725643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 22:00:50.220599  725643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 22:00:50.220651  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:50.241803  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:00:50.334134  725643 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 22:00:50.337257  725643 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 22:00:50.337291  725643 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 22:00:50.337298  725643 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 22:00:50.337305  725643 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 22:00:50.337325  725643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/addons for local assets ...
	I0904 22:00:50.337372  725643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21490-384635/.minikube/files for local assets ...
	I0904 22:00:50.337455  725643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem -> 3883602.pem in /etc/ssl/certs
	I0904 22:00:50.337544  725643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 22:00:50.345491  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem --> /etc/ssl/certs/3883602.pem (1708 bytes)
	I0904 22:00:50.371533  725643 start.go:296] duration metric: took 150.993143ms for postStartSetup
	I0904 22:00:50.371870  725643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-364928
	I0904 22:00:50.393981  725643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/config.json ...
	I0904 22:00:50.394238  725643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 22:00:50.394299  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:50.412484  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:00:50.497713  725643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 22:00:50.501722  725643 start.go:128] duration metric: took 8.266714891s to createHost
	I0904 22:00:50.501744  725643 start.go:83] releasing machines lock for "bridge-364928", held for 8.266868723s
	I0904 22:00:50.501798  725643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-364928
	I0904 22:00:50.519244  725643 ssh_runner.go:195] Run: cat /version.json
	I0904 22:00:50.519277  725643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 22:00:50.519295  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:50.519345  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:00:50.536666  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:00:50.537512  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:00:50.694681  725643 ssh_runner.go:195] Run: systemctl --version
	I0904 22:00:50.699410  725643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 22:00:50.839250  725643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 22:00:50.844270  725643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 22:00:50.864344  725643 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 22:00:50.864415  725643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 22:00:50.896347  725643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0904 22:00:50.896373  725643 start.go:495] detecting cgroup driver to use...
	I0904 22:00:50.896404  725643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 22:00:50.896478  725643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 22:00:50.911356  725643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 22:00:50.921621  725643 docker.go:218] disabling cri-docker service (if available) ...
	I0904 22:00:50.921669  725643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 22:00:50.933458  725643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 22:00:50.946984  725643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 22:00:51.029569  725643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 22:00:51.108914  725643 docker.go:234] disabling docker service ...
	I0904 22:00:51.108986  725643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 22:00:51.126466  725643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 22:00:51.136964  725643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 22:00:51.210893  725643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 22:00:51.300707  725643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 22:00:51.311330  725643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 22:00:51.326012  725643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 22:00:51.326070  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.334794  725643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 22:00:51.334848  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.343610  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.352356  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.362048  725643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 22:00:51.371363  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.383275  725643 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.398675  725643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 22:00:51.407686  725643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 22:00:51.415159  725643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 22:00:51.422772  725643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 22:00:51.502548  725643 ssh_runner.go:195] Run: sudo systemctl restart crio
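Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with roughly the settings below before crio is restarted. This is a reconstruction from the commands shown (section headers omitted), not a capture of the actual file:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]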
	I0904 22:00:51.620440  725643 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 22:00:51.620505  725643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 22:00:51.623952  725643 start.go:563] Will wait 60s for crictl version
	I0904 22:00:51.623999  725643 ssh_runner.go:195] Run: which crictl
	I0904 22:00:51.627189  725643 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 22:00:51.662090  725643 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 22:00:51.662171  725643 ssh_runner.go:195] Run: crio --version
	I0904 22:00:51.697178  725643 ssh_runner.go:195] Run: crio --version
	I0904 22:00:51.731546  725643 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 22:00:51.732569  725643 cli_runner.go:164] Run: docker network inspect bridge-364928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 22:00:51.748597  725643 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0904 22:00:51.752505  725643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 22:00:51.763994  725643 kubeadm.go:875] updating cluster {Name:bridge-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 22:00:51.764113  725643 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 22:00:51.764171  725643 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 22:00:51.833376  725643 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 22:00:51.833400  725643 crio.go:433] Images already preloaded, skipping extraction
	I0904 22:00:51.833456  725643 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 22:00:51.866515  725643 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 22:00:51.866536  725643 cache_images.go:85] Images are preloaded, skipping loading
	I0904 22:00:51.866544  725643 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 crio true true} ...
	I0904 22:00:51.866627  725643 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=bridge-364928 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:bridge-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0904 22:00:51.866688  725643 ssh_runner.go:195] Run: crio config
	I0904 22:00:51.910430  725643 cni.go:84] Creating CNI manager for "bridge"
	I0904 22:00:51.910451  725643 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 22:00:51.910489  725643 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-364928 NodeName:bridge-364928 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 22:00:51.910619  725643 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-364928"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 22:00:51.910673  725643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 22:00:51.919287  725643 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 22:00:51.919344  725643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 22:00:51.927126  725643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 22:00:51.943313  725643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 22:00:51.959657  725643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0904 22:00:51.976125  725643 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0904 22:00:51.979320  725643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 22:00:51.989293  725643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0904 22:00:49.438116  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:51.938028  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:00:50.956390  718003 node_ready.go:49] node "flannel-364928" is "Ready"
	I0904 22:00:50.956416  718003 node_ready.go:38] duration metric: took 20.003181245s for node "flannel-364928" to be "Ready" ...
	I0904 22:00:50.956430  718003 api_server.go:52] waiting for apiserver process to appear ...
	I0904 22:00:50.956473  718003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 22:00:50.967984  718003 api_server.go:72] duration metric: took 20.974991113s to wait for apiserver process to appear ...
	I0904 22:00:50.968016  718003 api_server.go:88] waiting for apiserver healthz status ...
	I0904 22:00:50.968043  718003 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0904 22:00:50.972466  718003 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0904 22:00:50.973547  718003 api_server.go:141] control plane version: v1.34.0
	I0904 22:00:50.973571  718003 api_server.go:131] duration metric: took 5.546507ms to wait for apiserver health ...
	I0904 22:00:50.973581  718003 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 22:00:50.977344  718003 system_pods.go:59] 7 kube-system pods found
	I0904 22:00:50.977374  718003 system_pods.go:61] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:50.977380  718003 system_pods.go:61] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:50.977385  718003 system_pods.go:61] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:50.977389  718003 system_pods.go:61] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:50.977393  718003 system_pods.go:61] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:50.977397  718003 system_pods.go:61] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:50.977402  718003 system_pods.go:61] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:00:50.977413  718003 system_pods.go:74] duration metric: took 3.825887ms to wait for pod list to return data ...
	I0904 22:00:50.977421  718003 default_sa.go:34] waiting for default service account to be created ...
	I0904 22:00:50.979777  718003 default_sa.go:45] found service account: "default"
	I0904 22:00:50.979795  718003 default_sa.go:55] duration metric: took 2.365481ms for default service account to be created ...
	I0904 22:00:50.979802  718003 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 22:00:50.982582  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:50.982615  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:50.982623  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:50.982637  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:50.982643  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:50.982651  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:50.982656  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:50.982664  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:00:50.982695  718003 retry.go:31] will retry after 272.663724ms: missing components: kube-dns
	I0904 22:00:51.259377  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:51.259411  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:51.259417  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:51.259422  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:51.259428  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:51.259434  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:51.259438  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:51.259447  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:00:51.259471  718003 retry.go:31] will retry after 344.99828ms: missing components: kube-dns
	I0904 22:00:51.608603  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:51.608634  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:51.608640  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:51.608645  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:51.608650  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:51.608655  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:51.608660  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:51.608667  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:51.608689  718003 retry.go:31] will retry after 294.96852ms: missing components: kube-dns
	I0904 22:00:51.907526  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:51.907565  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:51.907574  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:51.907583  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:51.907590  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:51.907604  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:51.907613  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:51.907619  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:51.907642  718003 retry.go:31] will retry after 416.023679ms: missing components: kube-dns
	I0904 22:00:52.327518  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:52.327549  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:52.327555  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:52.327561  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:52.327565  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:52.327570  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:52.327573  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:52.327577  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:52.327592  718003 retry.go:31] will retry after 589.759743ms: missing components: kube-dns
	I0904 22:00:52.921004  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:52.921042  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:52.921053  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:52.921062  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:52.921069  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:52.921074  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:52.921079  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:52.921084  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:52.921102  718003 retry.go:31] will retry after 599.459014ms: missing components: kube-dns
	I0904 22:00:53.524439  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:53.524470  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:53.524476  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:53.524484  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:53.524487  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:53.524495  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:53.524500  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:53.524503  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:53.524527  718003 retry.go:31] will retry after 1.117785208s: missing components: kube-dns
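The flannel-364928 poll above repeats a list-pods / check-components cycle, sleeping a short, growing interval between attempts (272ms, 344ms, ..., 1.1s) until kube-dns reports Running. Below is a minimal, hypothetical Go sketch of that poll-with-backoff pattern; the helper name, delays, and jitter are illustrative and not minikube's retry package.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil polls check until it succeeds or the timeout elapses, sleeping a
    // jittered, slowly growing delay between attempts.
    func retryUntil(timeout time.Duration, check func() (bool, string)) error {
    	deadline := time.Now().Add(timeout)
    	base := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		ok, missing := check()
    		if ok {
    			return nil
    		}
    		delay := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: missing components: %s\n", delay, missing)
    		time.Sleep(delay)
    		base += base / 4 // grow the next delay, as the log intervals do
    	}
    	return fmt.Errorf("timed out waiting for components")
    }

    func main() {
    	attempts := 0
    	err := retryUntil(10*time.Second, func() (bool, string) {
    		attempts++
    		if attempts < 5 {
    			return false, "kube-dns" // coredns pod still Pending
    		}
    		return true, ""
    	})
    	fmt.Println("result:", err)
    }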
	I0904 22:00:52.075854  725643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 22:00:52.088323  725643 certs.go:68] Setting up /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928 for IP: 192.168.94.2
	I0904 22:00:52.088361  725643 certs.go:194] generating shared ca certs ...
	I0904 22:00:52.088390  725643 certs.go:226] acquiring lock for ca certs: {Name:mkab35eaf89c739a644dd45428dbbd1b30c489d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.088561  725643 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key
	I0904 22:00:52.088633  725643 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key
	I0904 22:00:52.088654  725643 certs.go:256] generating profile certs ...
	I0904 22:00:52.088724  725643 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.key
	I0904 22:00:52.088744  725643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.crt with IP's: []
	I0904 22:00:52.147941  725643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.crt ...
	I0904 22:00:52.147972  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.crt: {Name:mkba7540c100fed0888915e572b6d80906b46359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.148130  725643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.key ...
	I0904 22:00:52.148141  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.key: {Name:mkd0c182442997f9aa8ab6a6b8658e4f65cbbe00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.148241  725643 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key.5cac4715
	I0904 22:00:52.148270  725643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt.5cac4715 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0904 22:00:52.526926  725643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt.5cac4715 ...
	I0904 22:00:52.526954  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt.5cac4715: {Name:mkcbc14de6c1e777100f7645a2847e2cb12945ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.527104  725643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key.5cac4715 ...
	I0904 22:00:52.527116  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key.5cac4715: {Name:mkf82a7b61846a0bb437c8a848686f0e6b9429b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.527183  725643 certs.go:381] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt.5cac4715 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt
	I0904 22:00:52.527265  725643 certs.go:385] copying /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key.5cac4715 -> /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key
	I0904 22:00:52.527316  725643 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.key
	I0904 22:00:52.527331  725643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.crt with IP's: []
	I0904 22:00:52.959678  725643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.crt ...
	I0904 22:00:52.959710  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.crt: {Name:mke6d0b557e9e40bd71441fcfbbe9d49d796afba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.959889  725643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.key ...
	I0904 22:00:52.959907  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.key: {Name:mk0ebd647bc20c5900a8ec663bd477f379fcacd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:00:52.960109  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360.pem (1338 bytes)
	W0904 22:00:52.960158  725643 certs.go:480] ignoring /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360_empty.pem, impossibly tiny 0 bytes
	I0904 22:00:52.960175  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca-key.pem (1679 bytes)
	I0904 22:00:52.960215  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/ca.pem (1078 bytes)
	I0904 22:00:52.960255  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/cert.pem (1123 bytes)
	I0904 22:00:52.960283  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/certs/key.pem (1675 bytes)
	I0904 22:00:52.960330  725643 certs.go:484] found cert: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem (1708 bytes)
	I0904 22:00:52.960992  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 22:00:52.984818  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 22:00:53.006464  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 22:00:53.028348  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 22:00:53.049941  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 22:00:53.071263  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 22:00:53.093408  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 22:00:53.115180  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 22:00:53.136534  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/ssl/certs/3883602.pem --> /usr/share/ca-certificates/3883602.pem (1708 bytes)
	I0904 22:00:53.157876  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 22:00:53.178736  725643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21490-384635/.minikube/certs/388360.pem --> /usr/share/ca-certificates/388360.pem (1338 bytes)
	I0904 22:00:53.200638  725643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 22:00:53.216451  725643 ssh_runner.go:195] Run: openssl version
	I0904 22:00:53.221277  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3883602.pem && ln -fs /usr/share/ca-certificates/3883602.pem /etc/ssl/certs/3883602.pem"
	I0904 22:00:53.229963  725643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3883602.pem
	I0904 22:00:53.233229  725643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 21:07 /usr/share/ca-certificates/3883602.pem
	I0904 22:00:53.233277  725643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3883602.pem
	I0904 22:00:53.239423  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3883602.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 22:00:53.248324  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 22:00:53.257087  725643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 22:00:53.260046  725643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:56 /usr/share/ca-certificates/minikubeCA.pem
	I0904 22:00:53.260088  725643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 22:00:53.266224  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 22:00:53.274611  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388360.pem && ln -fs /usr/share/ca-certificates/388360.pem /etc/ssl/certs/388360.pem"
	I0904 22:00:53.283006  725643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388360.pem
	I0904 22:00:53.286102  725643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 21:07 /usr/share/ca-certificates/388360.pem
	I0904 22:00:53.286141  725643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388360.pem
	I0904 22:00:53.292491  725643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/388360.pem /etc/ssl/certs/51391683.0"
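
	The log lines above show the CA distribution step: each certificate is hashed with `openssl x509 -hash -noout` and then exposed under /etc/ssl/certs as a <hash>.0 symlink so OpenSSL-based clients can resolve it. The following is a minimal Go sketch of that check-and-link pattern, assuming openssl is on PATH; it is an illustration of the pattern in the log, not minikube's actual certs.go code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert mirrors the shell pattern in the log:
	//   openssl x509 -hash -noout -in <cert>
	//   test -L /etc/ssl/certs/<hash>.0 || ln -fs <cert> /etc/ssl/certs/<hash>.0
	func linkCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // symlink (or file) already present, nothing to do
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		// Example cert path taken from the log; running this requires write access to /etc/ssl/certs.
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
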
	I0904 22:00:53.300835  725643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 22:00:53.303630  725643 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 22:00:53.303681  725643 kubeadm.go:392] StartCluster: {Name:bridge-364928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-364928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 22:00:53.303741  725643 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 22:00:53.303777  725643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 22:00:53.336824  725643 cri.go:89] found id: ""
	I0904 22:00:53.336901  725643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 22:00:53.344949  725643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 22:00:53.353109  725643 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 22:00:53.353161  725643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 22:00:53.360930  725643 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 22:00:53.360957  725643 kubeadm.go:157] found existing configuration files:
	
	I0904 22:00:53.360995  725643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 22:00:53.368620  725643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 22:00:53.368663  725643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 22:00:53.376353  725643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 22:00:53.385602  725643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 22:00:53.385658  725643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 22:00:53.394864  725643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 22:00:53.402681  725643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 22:00:53.402723  725643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 22:00:53.410219  725643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 22:00:53.417877  725643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 22:00:53.417921  725643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 22:00:53.425251  725643 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 22:00:53.478768  725643 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0904 22:00:53.479060  725643 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0904 22:00:53.536925  725643 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0904 22:00:54.437978  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:00:56.937556  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:00:54.646509  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:54.646544  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:54.646551  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:54.646558  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:54.646562  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:54.646565  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:54.646569  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:54.646572  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:54.646587  718003 retry.go:31] will retry after 1.326366412s: missing components: kube-dns
	I0904 22:00:55.976987  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:55.977020  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:55.977028  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:55.977034  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:55.977038  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:55.977044  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:55.977049  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:55.977054  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:55.977074  718003 retry.go:31] will retry after 1.650931689s: missing components: kube-dns
	I0904 22:00:57.632745  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:57.632809  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:57.632817  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:57.632826  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:57.632832  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:57.632839  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:57.632846  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:57.632852  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:57.632874  718003 retry.go:31] will retry after 1.867355783s: missing components: kube-dns
	W0904 22:00:59.437273  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:01.437472  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:03.437854  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:00:59.504432  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:00:59.504464  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:00:59.504470  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:00:59.504477  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:00:59.504481  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:00:59.504484  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:00:59.504487  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:00:59.504490  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:00:59.504507  718003 retry.go:31] will retry after 2.650552146s: missing components: kube-dns
	I0904 22:01:02.160493  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:01:02.160537  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:02.160545  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:01:02.160553  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:01:02.160558  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:01:02.160565  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:01:02.160572  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:01:02.160579  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:01:02.160597  718003 retry.go:31] will retry after 2.230843332s: missing components: kube-dns
	I0904 22:01:04.396266  718003 system_pods.go:86] 7 kube-system pods found
	I0904 22:01:04.396299  718003 system_pods.go:89] "coredns-66bc5c9577-xsdj5" [c61291ea-1a68-4588-9c21-4d2fddd5841c] Running
	I0904 22:01:04.396308  718003 system_pods.go:89] "etcd-flannel-364928" [fbd939e4-dd24-4ebe-9a25-f4cecc315787] Running
	I0904 22:01:04.396316  718003 system_pods.go:89] "kube-apiserver-flannel-364928" [e170c8ad-8f30-49ee-909a-2c034bb8cb84] Running
	I0904 22:01:04.396322  718003 system_pods.go:89] "kube-controller-manager-flannel-364928" [e0b88933-1bb3-4b5d-839f-c994622fe176] Running
	I0904 22:01:04.396326  718003 system_pods.go:89] "kube-proxy-6gcgv" [98e7c2fb-9c8f-46b2-8fe9-00fb5f691e35] Running
	I0904 22:01:04.396330  718003 system_pods.go:89] "kube-scheduler-flannel-364928" [df038abc-0fe9-4b20-b9d5-b6d07cc5d63e] Running
	I0904 22:01:04.396334  718003 system_pods.go:89] "storage-provisioner" [82eb6b3c-f3a8-4bc8-af68-10771f976b77] Running
	I0904 22:01:04.396346  718003 system_pods.go:126] duration metric: took 13.416536037s to wait for k8s-apps to be running ...
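
	The flannel-364928 lines above are a readiness poll: minikube repeatedly lists kube-system pods and retries with a delay until kube-dns reports Running. A short client-go sketch of that kind of poll is shown below, assuming a kubeconfig on disk; the kubeconfig path and timeout are hypothetical and this is not minikube's system_pods.go implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForSystemPods lists kube-system pods and retries until all of them are Running.
	func waitForSystemPods(ctx context.Context, client kubernetes.Interface) error {
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err != nil {
				return err
			}
			pending := 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					pending++
				}
			}
			if pending == 0 {
				return nil
			}
			fmt.Printf("retrying: %d kube-system pod(s) not yet Running\n", pending)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical kubeconfig path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		if err := waitForSystemPods(ctx, client); err != nil {
			panic(err)
		}
		fmt.Println("all kube-system pods are Running")
	}
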
	I0904 22:01:04.396362  718003 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 22:01:04.396415  718003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 22:01:04.408216  718003 system_svc.go:56] duration metric: took 11.844154ms WaitForService to wait for kubelet
	I0904 22:01:04.408242  718003 kubeadm.go:578] duration metric: took 34.415258584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 22:01:04.408261  718003 node_conditions.go:102] verifying NodePressure condition ...
	I0904 22:01:04.411053  718003 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 22:01:04.411082  718003 node_conditions.go:123] node cpu capacity is 8
	I0904 22:01:04.411106  718003 node_conditions.go:105] duration metric: took 2.840641ms to run NodePressure ...
	I0904 22:01:04.411123  718003 start.go:241] waiting for startup goroutines ...
	I0904 22:01:04.411137  718003 start.go:246] waiting for cluster config update ...
	I0904 22:01:04.411156  718003 start.go:255] writing updated cluster config ...
	I0904 22:01:04.411440  718003 ssh_runner.go:195] Run: rm -f paused
	I0904 22:01:04.414598  718003 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 22:01:04.418051  718003 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xsdj5" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.422193  718003 pod_ready.go:94] pod "coredns-66bc5c9577-xsdj5" is "Ready"
	I0904 22:01:04.422213  718003 pod_ready.go:86] duration metric: took 4.141947ms for pod "coredns-66bc5c9577-xsdj5" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.424148  718003 pod_ready.go:83] waiting for pod "etcd-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.427889  718003 pod_ready.go:94] pod "etcd-flannel-364928" is "Ready"
	I0904 22:01:04.427910  718003 pod_ready.go:86] duration metric: took 3.744027ms for pod "etcd-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.429825  718003 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.433469  718003 pod_ready.go:94] pod "kube-apiserver-flannel-364928" is "Ready"
	I0904 22:01:04.433487  718003 pod_ready.go:86] duration metric: took 3.630328ms for pod "kube-apiserver-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:04.435175  718003 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:05.609987  725643 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 22:01:05.610059  725643 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 22:01:05.610173  725643 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 22:01:05.610251  725643 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0904 22:01:05.610283  725643 kubeadm.go:310] OS: Linux
	I0904 22:01:05.610354  725643 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 22:01:05.610415  725643 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 22:01:05.610517  725643 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 22:01:05.610610  725643 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 22:01:05.610672  725643 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 22:01:05.610746  725643 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 22:01:05.610848  725643 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 22:01:05.610924  725643 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 22:01:05.611009  725643 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 22:01:05.611105  725643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 22:01:05.611207  725643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 22:01:05.611351  725643 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 22:01:05.611435  725643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 22:01:05.612866  725643 out.go:252]   - Generating certificates and keys ...
	I0904 22:01:05.612960  725643 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 22:01:05.613030  725643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 22:01:05.613114  725643 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 22:01:05.613196  725643 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 22:01:05.613295  725643 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 22:01:05.613368  725643 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 22:01:05.613436  725643 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 22:01:05.613598  725643 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-364928 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0904 22:01:05.613681  725643 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 22:01:05.613818  725643 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-364928 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0904 22:01:05.613914  725643 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 22:01:05.614023  725643 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 22:01:05.614075  725643 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 22:01:05.614147  725643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 22:01:05.614214  725643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 22:01:05.614285  725643 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 22:01:05.614356  725643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 22:01:05.614454  725643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 22:01:05.614540  725643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 22:01:05.614642  725643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 22:01:05.614711  725643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 22:01:05.615990  725643 out.go:252]   - Booting up control plane ...
	I0904 22:01:05.616070  725643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 22:01:05.616152  725643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 22:01:05.616240  725643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 22:01:05.616363  725643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 22:01:05.616467  725643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 22:01:05.616605  725643 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 22:01:05.616724  725643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 22:01:05.616826  725643 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 22:01:05.617011  725643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 22:01:05.617172  725643 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 22:01:05.617263  725643 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501462395s
	I0904 22:01:05.617399  725643 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 22:01:05.617504  725643 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0904 22:01:05.617635  725643 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 22:01:05.617763  725643 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 22:01:05.617887  725643 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.662270765s
	I0904 22:01:05.617988  725643 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 5.112965714s
	I0904 22:01:05.618071  725643 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001393475s
	I0904 22:01:05.618233  725643 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 22:01:05.618406  725643 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 22:01:05.618503  725643 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 22:01:05.618769  725643 kubeadm.go:310] [mark-control-plane] Marking the node bridge-364928 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 22:01:05.618848  725643 kubeadm.go:310] [bootstrap-token] Using token: ceznz5.mk1uab4zkkryxz7h
	I0904 22:01:05.620043  725643 out.go:252]   - Configuring RBAC rules ...
	I0904 22:01:05.620133  725643 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 22:01:05.620226  725643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 22:01:05.620353  725643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 22:01:05.620460  725643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 22:01:05.620567  725643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 22:01:05.620644  725643 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 22:01:05.620824  725643 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 22:01:05.620897  725643 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 22:01:05.620965  725643 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 22:01:05.620974  725643 kubeadm.go:310] 
	I0904 22:01:05.621057  725643 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 22:01:05.621064  725643 kubeadm.go:310] 
	I0904 22:01:05.621159  725643 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 22:01:05.621170  725643 kubeadm.go:310] 
	I0904 22:01:05.621212  725643 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 22:01:05.621303  725643 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 22:01:05.621378  725643 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 22:01:05.621386  725643 kubeadm.go:310] 
	I0904 22:01:05.621459  725643 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 22:01:05.621471  725643 kubeadm.go:310] 
	I0904 22:01:05.621533  725643 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 22:01:05.621546  725643 kubeadm.go:310] 
	I0904 22:01:05.621628  725643 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 22:01:05.621734  725643 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 22:01:05.621829  725643 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 22:01:05.621838  725643 kubeadm.go:310] 
	I0904 22:01:05.621964  725643 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 22:01:05.622063  725643 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 22:01:05.622072  725643 kubeadm.go:310] 
	I0904 22:01:05.622184  725643 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ceznz5.mk1uab4zkkryxz7h \
	I0904 22:01:05.622338  725643 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 \
	I0904 22:01:05.622374  725643 kubeadm.go:310] 	--control-plane 
	I0904 22:01:05.622384  725643 kubeadm.go:310] 
	I0904 22:01:05.622481  725643 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 22:01:05.622489  725643 kubeadm.go:310] 
	I0904 22:01:05.622587  725643 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ceznz5.mk1uab4zkkryxz7h \
	I0904 22:01:05.622720  725643 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:27309edecc409b6b2cac280161d86abdcb5cc9f2633f123f165aab1e20ce61c4 
	I0904 22:01:05.622734  725643 cni.go:84] Creating CNI manager for "bridge"
	I0904 22:01:05.624222  725643 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 22:01:04.819522  718003 pod_ready.go:94] pod "kube-controller-manager-flannel-364928" is "Ready"
	I0904 22:01:04.819552  718003 pod_ready.go:86] duration metric: took 384.35874ms for pod "kube-controller-manager-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:05.019215  718003 pod_ready.go:83] waiting for pod "kube-proxy-6gcgv" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:05.419377  718003 pod_ready.go:94] pod "kube-proxy-6gcgv" is "Ready"
	I0904 22:01:05.419405  718003 pod_ready.go:86] duration metric: took 400.163306ms for pod "kube-proxy-6gcgv" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:05.619621  718003 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:06.018394  718003 pod_ready.go:94] pod "kube-scheduler-flannel-364928" is "Ready"
	I0904 22:01:06.018422  718003 pod_ready.go:86] duration metric: took 398.77956ms for pod "kube-scheduler-flannel-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:06.018432  718003 pod_ready.go:40] duration metric: took 1.603804111s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 22:01:06.063014  718003 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 22:01:06.064519  718003 out.go:179] * Done! kubectl is now configured to use "flannel-364928" cluster and "default" namespace by default
	I0904 22:01:05.625365  725643 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 22:01:05.634875  725643 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
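
	At this point the bridge CNI profile writes its conflist to /etc/cni/net.d/1-k8s.conflist. As a rough illustration of what such a file looks like, the sketch below writes a generic bridge + host-local conflist; the subnet and field values are examples and not the literal 496-byte file minikube generated here. Writing to /etc/cni/net.d requires root.

	package main

	import "os"

	// bridgeConflist is a generic example of a bridge CNI configuration list.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}
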
	I0904 22:01:05.652006  725643 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 22:01:05.652102  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:05.652107  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-364928 minikube.k8s.io/updated_at=2025_09_04T22_01_05_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a minikube.k8s.io/name=bridge-364928 minikube.k8s.io/primary=true
	I0904 22:01:05.661462  725643 ops.go:34] apiserver oom_adj: -16
	I0904 22:01:05.773298  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:06.273667  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:06.773501  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W0904 22:01:05.937387  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:07.937974  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:01:07.273459  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:07.773661  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:08.273960  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:08.773999  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:09.273963  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:09.773954  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:10.274093  725643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 22:01:10.340605  725643 kubeadm.go:1105] duration metric: took 4.688564247s to wait for elevateKubeSystemPrivileges
	I0904 22:01:10.340646  725643 kubeadm.go:394] duration metric: took 17.036968675s to StartCluster
	I0904 22:01:10.340667  725643 settings.go:142] acquiring lock: {Name:mke06342cfb6705345a5c7324f763dc44aea4569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:01:10.340738  725643 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 22:01:10.343387  725643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/kubeconfig: {Name:mk6b311573f3fade9cba8f894d5c9f5ca76d1e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 22:01:10.343785  725643 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 22:01:10.344421  725643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 22:01:10.344491  725643 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 22:01:10.344901  725643 addons.go:69] Setting default-storageclass=true in profile "bridge-364928"
	I0904 22:01:10.344912  725643 addons.go:69] Setting storage-provisioner=true in profile "bridge-364928"
	I0904 22:01:10.344940  725643 addons.go:238] Setting addon storage-provisioner=true in "bridge-364928"
	I0904 22:01:10.344941  725643 config.go:182] Loaded profile config "bridge-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 22:01:10.344937  725643 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-364928"
	I0904 22:01:10.345290  725643 host.go:66] Checking if "bridge-364928" exists ...
	I0904 22:01:10.345602  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:01:10.345827  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:01:10.346392  725643 out.go:179] * Verifying Kubernetes components...
	I0904 22:01:10.347519  725643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 22:01:10.369352  725643 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 22:01:10.370474  725643 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 22:01:10.370498  725643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 22:01:10.370548  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:01:10.375315  725643 addons.go:238] Setting addon default-storageclass=true in "bridge-364928"
	I0904 22:01:10.375361  725643 host.go:66] Checking if "bridge-364928" exists ...
	I0904 22:01:10.375809  725643 cli_runner.go:164] Run: docker container inspect bridge-364928 --format={{.State.Status}}
	I0904 22:01:10.400197  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:01:10.410137  725643 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 22:01:10.410164  725643 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 22:01:10.410219  725643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-364928
	I0904 22:01:10.427828  725643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/bridge-364928/id_rsa Username:docker}
	I0904 22:01:10.472181  725643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 22:01:10.555482  725643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 22:01:10.569140  725643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 22:01:10.645600  725643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 22:01:11.167566  725643 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0904 22:01:11.170739  725643 node_ready.go:35] waiting up to 15m0s for node "bridge-364928" to be "Ready" ...
	I0904 22:01:11.185103  725643 node_ready.go:49] node "bridge-364928" is "Ready"
	I0904 22:01:11.185130  725643 node_ready.go:38] duration metric: took 14.362304ms for node "bridge-364928" to be "Ready" ...
	I0904 22:01:11.185142  725643 api_server.go:52] waiting for apiserver process to appear ...
	I0904 22:01:11.185181  725643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 22:01:11.653679  725643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008032734s)
	I0904 22:01:11.654120  725643 api_server.go:72] duration metric: took 1.310300353s to wait for apiserver process to appear ...
	I0904 22:01:11.654146  725643 api_server.go:88] waiting for apiserver healthz status ...
	I0904 22:01:11.654182  725643 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 22:01:11.656116  725643 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0904 22:01:11.657567  725643 addons.go:514] duration metric: took 1.313087435s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0904 22:01:11.664106  725643 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0904 22:01:11.665281  725643 api_server.go:141] control plane version: v1.34.0
	I0904 22:01:11.665350  725643 api_server.go:131] duration metric: took 11.19419ms to wait for apiserver health ...
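
	The healthz wait above boils down to probing https://<node-ip>:8443/healthz and treating an HTTP 200 with body "ok" as healthy. A minimal Go sketch of that probe follows; the address is taken from the log, the timeout is an assumption, and a real check should trust the cluster CA rather than skipping TLS verification as this quick illustration does.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy performs a single GET against the healthz endpoint.
	func apiserverHealthy(url string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: skip verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.94.2:8443/healthz")
		fmt.Println(ok, err)
	}
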
	I0904 22:01:11.665375  725643 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 22:01:11.670504  725643 system_pods.go:59] 8 kube-system pods found
	I0904 22:01:11.670580  725643 system_pods.go:61] "coredns-66bc5c9577-27hq7" [24c57b5d-e1c8-4b78-af23-ff35e32521fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.670602  725643 system_pods.go:61] "coredns-66bc5c9577-5vtqt" [33265747-58b4-43d7-ac2d-5e64a076a7ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.670635  725643 system_pods.go:61] "etcd-bridge-364928" [08f011c1-a3f0-461f-b653-b935e1d4109a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 22:01:11.670662  725643 system_pods.go:61] "kube-apiserver-bridge-364928" [ac802c39-e179-4f40-a5be-0336a2f5ef17] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 22:01:11.670680  725643 system_pods.go:61] "kube-controller-manager-bridge-364928" [9270e531-bc78-4742-bcc1-95a87b90fbc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 22:01:11.670701  725643 system_pods.go:61] "kube-proxy-77sc2" [e727f448-5f92-4de0-bedf-0d85d90fe1be] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 22:01:11.670716  725643 system_pods.go:61] "kube-scheduler-bridge-364928" [e49ad4ee-4dcd-456c-8a05-40105005827b] Running
	I0904 22:01:11.670742  725643 system_pods.go:61] "storage-provisioner" [18d36717-28bc-478f-9969-3e84f691bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:01:11.670765  725643 system_pods.go:74] duration metric: took 5.374761ms to wait for pod list to return data ...
	I0904 22:01:11.670784  725643 default_sa.go:34] waiting for default service account to be created ...
	I0904 22:01:11.671906  725643 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-364928" context rescaled to 1 replicas
	I0904 22:01:11.673734  725643 default_sa.go:45] found service account: "default"
	I0904 22:01:11.673751  725643 default_sa.go:55] duration metric: took 2.952919ms for default service account to be created ...
	I0904 22:01:11.673758  725643 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 22:01:11.677018  725643 system_pods.go:86] 8 kube-system pods found
	I0904 22:01:11.677048  725643 system_pods.go:89] "coredns-66bc5c9577-27hq7" [24c57b5d-e1c8-4b78-af23-ff35e32521fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.677074  725643 system_pods.go:89] "coredns-66bc5c9577-5vtqt" [33265747-58b4-43d7-ac2d-5e64a076a7ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.677088  725643 system_pods.go:89] "etcd-bridge-364928" [08f011c1-a3f0-461f-b653-b935e1d4109a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 22:01:11.677098  725643 system_pods.go:89] "kube-apiserver-bridge-364928" [ac802c39-e179-4f40-a5be-0336a2f5ef17] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 22:01:11.677112  725643 system_pods.go:89] "kube-controller-manager-bridge-364928" [9270e531-bc78-4742-bcc1-95a87b90fbc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 22:01:11.677123  725643 system_pods.go:89] "kube-proxy-77sc2" [e727f448-5f92-4de0-bedf-0d85d90fe1be] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 22:01:11.677129  725643 system_pods.go:89] "kube-scheduler-bridge-364928" [e49ad4ee-4dcd-456c-8a05-40105005827b] Running
	I0904 22:01:11.677156  725643 system_pods.go:89] "storage-provisioner" [18d36717-28bc-478f-9969-3e84f691bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:01:11.677195  725643 retry.go:31] will retry after 187.908151ms: missing components: kube-dns, kube-proxy
	I0904 22:01:11.869501  725643 system_pods.go:86] 8 kube-system pods found
	I0904 22:01:11.869539  725643 system_pods.go:89] "coredns-66bc5c9577-27hq7" [24c57b5d-e1c8-4b78-af23-ff35e32521fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.869546  725643 system_pods.go:89] "coredns-66bc5c9577-5vtqt" [33265747-58b4-43d7-ac2d-5e64a076a7ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:11.869554  725643 system_pods.go:89] "etcd-bridge-364928" [08f011c1-a3f0-461f-b653-b935e1d4109a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 22:01:11.869559  725643 system_pods.go:89] "kube-apiserver-bridge-364928" [ac802c39-e179-4f40-a5be-0336a2f5ef17] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 22:01:11.869566  725643 system_pods.go:89] "kube-controller-manager-bridge-364928" [9270e531-bc78-4742-bcc1-95a87b90fbc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 22:01:11.869571  725643 system_pods.go:89] "kube-proxy-77sc2" [e727f448-5f92-4de0-bedf-0d85d90fe1be] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 22:01:11.869575  725643 system_pods.go:89] "kube-scheduler-bridge-364928" [e49ad4ee-4dcd-456c-8a05-40105005827b] Running
	I0904 22:01:11.869580  725643 system_pods.go:89] "storage-provisioner" [18d36717-28bc-478f-9969-3e84f691bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:01:11.869595  725643 retry.go:31] will retry after 304.60066ms: missing components: kube-dns, kube-proxy
	W0904 22:01:10.438106  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:12.937710  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:01:12.178971  725643 system_pods.go:86] 8 kube-system pods found
	I0904 22:01:12.179012  725643 system_pods.go:89] "coredns-66bc5c9577-27hq7" [24c57b5d-e1c8-4b78-af23-ff35e32521fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:12.179039  725643 system_pods.go:89] "coredns-66bc5c9577-5vtqt" [33265747-58b4-43d7-ac2d-5e64a076a7ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 22:01:12.179051  725643 system_pods.go:89] "etcd-bridge-364928" [08f011c1-a3f0-461f-b653-b935e1d4109a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 22:01:12.179065  725643 system_pods.go:89] "kube-apiserver-bridge-364928" [ac802c39-e179-4f40-a5be-0336a2f5ef17] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 22:01:12.179072  725643 system_pods.go:89] "kube-controller-manager-bridge-364928" [9270e531-bc78-4742-bcc1-95a87b90fbc8] Running
	I0904 22:01:12.179079  725643 system_pods.go:89] "kube-proxy-77sc2" [e727f448-5f92-4de0-bedf-0d85d90fe1be] Running
	I0904 22:01:12.179084  725643 system_pods.go:89] "kube-scheduler-bridge-364928" [e49ad4ee-4dcd-456c-8a05-40105005827b] Running
	I0904 22:01:12.179091  725643 system_pods.go:89] "storage-provisioner" [18d36717-28bc-478f-9969-3e84f691bb3c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 22:01:12.179103  725643 system_pods.go:126] duration metric: took 505.338198ms to wait for k8s-apps to be running ...
	I0904 22:01:12.179118  725643 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 22:01:12.179172  725643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 22:01:12.192434  725643 system_svc.go:56] duration metric: took 13.3042ms WaitForService to wait for kubelet
	I0904 22:01:12.192478  725643 kubeadm.go:578] duration metric: took 1.84865365s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 22:01:12.192504  725643 node_conditions.go:102] verifying NodePressure condition ...
	I0904 22:01:12.196327  725643 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 22:01:12.196360  725643 node_conditions.go:123] node cpu capacity is 8
	I0904 22:01:12.196375  725643 node_conditions.go:105] duration metric: took 3.864833ms to run NodePressure ...
	I0904 22:01:12.196390  725643 start.go:241] waiting for startup goroutines ...
	I0904 22:01:12.196405  725643 start.go:246] waiting for cluster config update ...
	I0904 22:01:12.196422  725643 start.go:255] writing updated cluster config ...
	I0904 22:01:12.196780  725643 ssh_runner.go:195] Run: rm -f paused
	I0904 22:01:12.200208  725643 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 22:01:12.203813  725643 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-27hq7" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 22:01:14.207947  725643 pod_ready.go:104] pod "coredns-66bc5c9577-27hq7" is not "Ready", error: <nil>
	W0904 22:01:16.208606  725643 pod_ready.go:104] pod "coredns-66bc5c9577-27hq7" is not "Ready", error: <nil>
	W0904 22:01:14.937965  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:17.437507  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:18.208703  725643 pod_ready.go:104] pod "coredns-66bc5c9577-27hq7" is not "Ready", error: <nil>
	W0904 22:01:20.209541  725643 pod_ready.go:104] pod "coredns-66bc5c9577-27hq7" is not "Ready", error: <nil>
	W0904 22:01:19.438103  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:21.937987  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:01:22.206461  725643 pod_ready.go:99] pod "coredns-66bc5c9577-27hq7" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-27hq7" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-27hq7" not found
	I0904 22:01:22.206487  725643 pod_ready.go:86] duration metric: took 10.002650012s for pod "coredns-66bc5c9577-27hq7" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:22.206507  725643 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5vtqt" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 22:01:24.211280  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:26.211688  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:23.938056  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:26.437578  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:28.438261  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:28.212139  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:30.712196  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:30.937722  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:33.437293  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:33.211435  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:35.212002  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:35.437642  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:37.937744  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:37.712311  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:40.211486  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	W0904 22:01:40.437678  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:42.437923  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:42.212241  725643 pod_ready.go:104] pod "coredns-66bc5c9577-5vtqt" is not "Ready", error: <nil>
	I0904 22:01:43.212168  725643 pod_ready.go:94] pod "coredns-66bc5c9577-5vtqt" is "Ready"
	I0904 22:01:43.212195  725643 pod_ready.go:86] duration metric: took 21.005679647s for pod "coredns-66bc5c9577-5vtqt" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.214551  725643 pod_ready.go:83] waiting for pod "etcd-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.218080  725643 pod_ready.go:94] pod "etcd-bridge-364928" is "Ready"
	I0904 22:01:43.218103  725643 pod_ready.go:86] duration metric: took 3.531186ms for pod "etcd-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.219887  725643 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.223418  725643 pod_ready.go:94] pod "kube-apiserver-bridge-364928" is "Ready"
	I0904 22:01:43.223435  725643 pod_ready.go:86] duration metric: took 3.53062ms for pod "kube-apiserver-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.225015  725643 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.410438  725643 pod_ready.go:94] pod "kube-controller-manager-bridge-364928" is "Ready"
	I0904 22:01:43.410467  725643 pod_ready.go:86] duration metric: took 185.434485ms for pod "kube-controller-manager-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:43.610582  725643 pod_ready.go:83] waiting for pod "kube-proxy-77sc2" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.011194  725643 pod_ready.go:94] pod "kube-proxy-77sc2" is "Ready"
	I0904 22:01:44.011223  725643 pod_ready.go:86] duration metric: took 400.613464ms for pod "kube-proxy-77sc2" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.211137  725643 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.611081  725643 pod_ready.go:94] pod "kube-scheduler-bridge-364928" is "Ready"
	I0904 22:01:44.611106  725643 pod_ready.go:86] duration metric: took 399.940376ms for pod "kube-scheduler-bridge-364928" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 22:01:44.611116  725643 pod_ready.go:40] duration metric: took 32.410883034s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 22:01:44.654298  725643 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 22:01:44.655877  725643 out.go:179] * Done! kubectl is now configured to use "bridge-364928" cluster and "default" namespace by default
	W0904 22:01:44.437987  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:46.937265  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:49.436866  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:51.437349  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:53.438359  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:55.937214  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:01:58.437298  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:00.938071  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:03.437903  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:05.937716  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:08.437864  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:10.438463  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:12.938322  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:15.437650  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:17.438523  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:19.937736  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:21.937952  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:24.437850  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:26.937448  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:29.437316  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:31.438207  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:33.938001  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:36.437346  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:38.437594  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:40.937374  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:42.937487  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:44.937549  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:47.437717  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:49.937580  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:51.937802  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:54.437967  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:56.937548  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:02:59.437491  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:01.437951  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:03.937792  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:06.437308  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:08.438195  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:10.936974  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:12.937521  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:14.938030  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:17.438133  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:19.937883  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:22.437545  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:24.937562  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:26.937796  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:29.438021  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:31.937501  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:33.937898  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:36.438176  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:38.937161  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:40.938107  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:43.437266  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:45.437429  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:47.937225  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:50.437223  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:52.937254  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:55.437375  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:57.438418  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:03:59.937385  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:02.437352  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:04.437658  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:06.937189  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:09.437373  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:11.437468  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:13.937204  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:16.437206  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:18.437886  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:20.438103  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:22.937047  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:24.937427  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:27.437105  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:29.437276  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:31.437820  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:33.937680  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:35.937736  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:38.437903  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:40.937618  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:42.937757  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:45.437774  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:47.937195  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:50.437564  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:52.437750  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:54.936920  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:56.937154  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:04:58.937443  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:01.437319  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:03.437808  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:05.937874  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:08.438039  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:10.937114  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:12.937617  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:15.437197  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:17.937966  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:20.436943  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:22.437112  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:24.437935  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:26.937596  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:28.938029  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:31.437927  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:33.937288  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:35.937367  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:37.937895  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:40.438111  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:42.937140  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:45.437042  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:47.437615  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:49.937478  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:51.937528  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:54.437233  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:56.437925  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:05:58.936789  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:00.937673  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:02.938006  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:05.437466  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:07.936896  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:09.936963  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:11.937136  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:13.937556  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:15.937885  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:17.938008  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:20.437052  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:22.937018  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:24.937171  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:26.937444  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:28.937556  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:31.437141  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:33.437332  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:35.438037  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:37.937281  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:39.937969  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:42.436919  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:44.437329  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:46.437924  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:48.937399  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:50.938023  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:52.938100  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:55.437938  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:57.937246  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:06:59.937506  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:02.437419  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:04.937324  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:07.436974  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:09.437077  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:11.936955  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:13.937505  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:16.436626  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:18.437679  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:20.438007  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:22.938133  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:25.437713  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:27.437915  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:29.937424  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:32.437450  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:34.937171  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:36.937391  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:39.437273  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:41.438401  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:43.937009  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:45.937776  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:48.437009  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:50.437785  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:52.937852  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:55.437836  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:07:57.936991  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:00.437972  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:02.937812  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:05.437938  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:07.937728  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:09.938106  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:12.437298  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:14.937213  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:16.937551  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:19.437480  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:21.437646  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:23.937892  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:26.437152  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:28.437493  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:30.438117  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:32.937234  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:35.437427  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:37.936980  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:39.937223  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:42.437165  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:44.437447  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:46.936969  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:48.937027  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:50.938116  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:53.438051  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:55.937450  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:08:58.437220  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:00.437569  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:02.437722  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:04.437966  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:06.937695  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:09.437701  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:11.937086  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:13.937824  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:16.437931  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:18.938141  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:20.939971  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:23.437463  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:25.937377  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:28.437269  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:30.437955  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:32.937797  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:35.438015  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:37.937882  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:39.938160  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:42.437116  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:44.437854  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:46.937315  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:48.937538  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:50.937626  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:53.437642  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:55.937098  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:09:58.436982  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:00.936923  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:02.937655  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:05.438068  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:07.937548  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:10.437273  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:12.437532  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:14.937301  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:17.437163  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:19.437226  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:21.937051  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:23.937726  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:26.437253  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:28.437288  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:30.437779  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:32.937018  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:34.938104  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:37.437554  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:39.937712  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:42.437382  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:44.437743  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:46.937816  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:49.438014  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:51.937372  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:53.937840  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:56.437444  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:10:58.937084  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:00.937447  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:02.938129  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:05.437617  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:07.437667  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:09.438098  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:11.937390  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:14.437008  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:16.437976  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:18.937591  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:20.937833  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:23.437285  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:25.437412  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:27.437726  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:29.937574  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:32.437756  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:34.937450  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:37.437331  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:39.937470  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:42.436799  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:44.437513  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:46.937227  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:48.937900  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:51.436971  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:53.438014  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:55.439752  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:11:57.937708  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:00.437280  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:02.937934  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:05.437147  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:07.936981  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:10.436926  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:12.437055  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:14.437362  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:16.437991  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:18.937371  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:20.937530  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:22.937944  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:25.437991  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:27.936897  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:29.937105  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:31.937768  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:34.437874  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:36.937223  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:38.937967  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:41.437363  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:43.937464  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:46.437158  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:48.437465  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:50.437747  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:52.937856  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:55.437315  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:12:57.937279  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:00.437211  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:02.437431  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:04.938022  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:07.437698  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:09.938121  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:12.437154  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:14.437397  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:16.937072  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:18.937583  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:20.938176  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:23.437549  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:25.937311  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:27.937633  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:30.437210  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:32.437479  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:34.437554  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:36.938016  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:39.437404  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:41.437781  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:43.937221  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:45.937478  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:48.436953  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:50.437316  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:52.937412  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:54.937774  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:57.438168  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:13:59.937210  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:14:02.437333  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	W0904 22:14:04.437856  695010 node_ready.go:57] node "calico-364928" has "Ready":"False" status (will retry)
	I0904 22:14:04.935455  695010 node_ready.go:38] duration metric: took 15m0.001042752s for node "calico-364928" to be "Ready" ...
	I0904 22:14:04.937400  695010 out.go:203] 
	W0904 22:14:04.938554  695010 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0904 22:14:04.938569  695010 out.go:285] * 
	W0904 22:14:04.940189  695010 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 22:14:04.942277  695010 out.go:203] 
	
	
	==> CRI-O <==
	Sep 04 22:15:27 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:15:27.882280867Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6529d7c8-403b-4124-99ab-90b2c1c4703b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:15:30 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:15:30.882416639Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8a9a6358-04cd-42ba-91f5-801b78edee89 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:15:30 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:15:30.882648014Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8a9a6358-04cd-42ba-91f5-801b78edee89 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:15:30 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:15:30.883274872Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=9edca907-4b3e-496a-b250-4a090a3a4e18 name=/runtime.v1.ImageService/PullImage
	Sep 04 22:15:31 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:15:31.047954444Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 04 22:15:39 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:15:39.882364955Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=906e08a0-329b-4f26-b934-467a19f998db name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:15:39 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:15:39.882623642Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=906e08a0-329b-4f26-b934-467a19f998db name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:15:45 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:15:45.882081728Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2ad21131-665a-4973-b78a-28b0441b6b81 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:15:45 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:15:45.882377639Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2ad21131-665a-4973-b78a-28b0441b6b81 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:15:50 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:15:50.882081693Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=80eaf82a-b43d-444c-8a35-ab05a1b7d819 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:15:50 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:15:50.882383319Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=80eaf82a-b43d-444c-8a35-ab05a1b7d819 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:00 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:00.882431203Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=d9210221-2a58-467f-9a65-1aa38e64d76c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:00 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:00.882669743Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d9210221-2a58-467f-9a65-1aa38e64d76c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:05 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:05.882366904Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=647f901d-b83e-42c2-a3e8-85714609f85b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:05 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:05.882662420Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=647f901d-b83e-42c2-a3e8-85714609f85b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:11 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:11.882021204Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f98d086b-c4ff-40b4-a14f-4547af99ac72 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:11 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:11.882255704Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f98d086b-c4ff-40b4-a14f-4547af99ac72 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:16 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:16.881857776Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e6c584ba-738e-482a-b208-0ea7fd7d731b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:16 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:16.882124399Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e6c584ba-738e-482a-b208-0ea7fd7d731b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:22 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:22.881991174Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5e43eccf-295d-4e1e-8c88-2170939a5676 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:22 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:22.882211351Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5e43eccf-295d-4e1e-8c88-2170939a5676 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:30 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:30.882250279Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b88b5cc7-fc80-4d1e-a9c3-32500d5bbf9f name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:30 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:30.882525193Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b88b5cc7-fc80-4d1e-a9c3-32500d5bbf9f name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:33 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:33.883055868Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=85afeee5-7a70-413a-b0dc-ff272cb09022 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 22:16:33 default-k8s-diff-port-601847 crio[675]: time="2025-09-04 22:16:33.883339036Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=85afeee5-7a70-413a-b0dc-ff272cb09022 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	0e906c67380e6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   About a minute ago   Exited              dashboard-metrics-scraper   8                   5ea23c6fc0ebc       dashboard-metrics-scraper-6ffb444bf9-fz65t
	19865017dd694       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago       Running             storage-provisioner         2                   bed1036522651       storage-provisioner
	fb9e4193d96f3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago       Running             coredns                     1                   5239fe21cafc6       coredns-66bc5c9577-6l9v7
	11f3a95d01801       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago       Running             busybox                     1                   451e72cb34b06       busybox
	e18526edf6ba3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago       Running             kindnet-cni                 1                   ca975460210ec       kindnet-2c8sv
	9928e6b6e53c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago       Exited              storage-provisioner         1                   bed1036522651       storage-provisioner
	b795726e2372e       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   18 minutes ago       Running             kube-proxy                  1                   b48722f5128a3       kube-proxy-zgdrw
	c085eb94106de       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 minutes ago       Running             kube-scheduler              1                   4d9e98de5611c       kube-scheduler-default-k8s-diff-port-601847
	16296337219c8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago       Running             etcd                        1                   2ce4c093e7fd7       etcd-default-k8s-diff-port-601847
	0ff5410f92b61       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 minutes ago       Running             kube-controller-manager     1                   63ef34691dbfc       kube-controller-manager-default-k8s-diff-port-601847
	009da3d5b4890       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 minutes ago       Running             kube-apiserver              1                   4301395da7fc0       kube-apiserver-default-k8s-diff-port-601847
	
	
	==> coredns [fb9e4193d96f31c22cb27f97cf797a4a64b14bbcbb1648abc8512b4b3e07fc81] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57779 - 32163 "HINFO IN 648113122663838148.1941072893311661962. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.458609592s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-601847
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-601847
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d82f852837f628b3930700b19196c39855cd258a
	                    minikube.k8s.io/name=default-k8s-diff-port-601847
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T21_56_36_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 21:56:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-601847
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 22:16:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 22:16:20 +0000   Thu, 04 Sep 2025 21:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 22:16:20 +0000   Thu, 04 Sep 2025 21:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 22:16:20 +0000   Thu, 04 Sep 2025 21:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 22:16:20 +0000   Thu, 04 Sep 2025 21:57:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-601847
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5df31fce7394b4db986c14ce48081e1
	  System UUID:                202b9b21-4e85-489b-b9fa-c1acfe66ebb3
	  Boot ID:                    d34ed5fc-a148-45de-9a0e-f744d5f792e8
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-6l9v7                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     20m
	  kube-system                 etcd-default-k8s-diff-port-601847                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         20m
	  kube-system                 kindnet-2c8sv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-601847             250m (3%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-601847    200m (2%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-zgdrw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-601847             100m (1%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-746fcd58dc-k7j78                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fz65t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-22q8g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 20m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           20m                node-controller  Node default-k8s-diff-port-601847 event: Registered Node default-k8s-diff-port-601847 in Controller
	  Normal   NodeReady                19m                kubelet          Node default-k8s-diff-port-601847 status is now: NodeReady
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-601847 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-601847 event: Registered Node default-k8s-diff-port-601847 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e b5 06 e3 98 d4 08 06
	[ +11.067174] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 05 50 71 c8 97 08 06
	[  +0.000348] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4e 9f 60 b8 d0 a4 08 06
	[Sep 4 22:00] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae c6 57 b4 5a ac 08 06
	[  +0.000332] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 7a 52 2a 9d 32 91 08 06
	[Sep 4 22:01] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca c1 7c bd 85 07 08 06
	[  +7.691011] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 20 b4 1f 35 71 08 06
	[  +0.517474] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 20 b4 1f 35 71 08 06
	[  +0.000824] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 94 02 98 e7 7a 08 06
	[  +9.031118] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e af ce f6 73 03 08 06
	[  +0.000308] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca c1 7c bd 85 07 08 06
	[ +32.638428] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8e e9 e7 47 0d 5c 08 06
	[  +0.000352] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 20 b4 1f 35 71 08 06
	
	
	==> etcd [16296337219c89f4129b435f9353f666fdd58ec04339099ecc4bb3f392a9c763] <==
	{"level":"warn","ts":"2025-09-04T21:57:56.450174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.457494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.464972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.471040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.477392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.485270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.491953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.550911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.558161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.565259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.571620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.600051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.645059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.651882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T21:57:56.754245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45464","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T21:58:37.656882Z","caller":"traceutil/trace.go:172","msg":"trace[988814331] transaction","detail":"{read_only:false; response_revision:689; number_of_response:1; }","duration":"170.493866ms","start":"2025-09-04T21:58:37.486243Z","end":"2025-09-04T21:58:37.656737Z","steps":["trace[988814331] 'process raft request'  (duration: 99.016179ms)","trace[988814331] 'compare'  (duration: 71.313612ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T21:58:52.066545Z","caller":"traceutil/trace.go:172","msg":"trace[1350173158] transaction","detail":"{read_only:false; response_revision:719; number_of_response:1; }","duration":"177.911555ms","start":"2025-09-04T21:58:51.888611Z","end":"2025-09-04T21:58:52.066523Z","steps":["trace[1350173158] 'process raft request'  (duration: 177.806952ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T22:00:45.291878Z","caller":"traceutil/trace.go:172","msg":"trace[1368556805] transaction","detail":"{read_only:false; response_revision:855; number_of_response:1; }","duration":"104.708392ms","start":"2025-09-04T22:00:45.187147Z","end":"2025-09-04T22:00:45.291856Z","steps":["trace[1368556805] 'process raft request'  (duration: 104.538694ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T22:00:46.225101Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.084458ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638355079021129573 > lease_revoke:<id:59069916bc7aef07>","response":"size:28"}
	{"level":"info","ts":"2025-09-04T22:07:55.676983Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1010}
	{"level":"info","ts":"2025-09-04T22:07:55.695861Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1010,"took":"18.523377ms","hash":3166299399,"current-db-size-bytes":3203072,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1339392,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-04T22:07:55.695917Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3166299399,"revision":1010,"compact-revision":-1}
	{"level":"info","ts":"2025-09-04T22:12:55.682806Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1309}
	{"level":"info","ts":"2025-09-04T22:12:55.685611Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1309,"took":"2.459902ms","hash":721489556,"current-db-size-bytes":3203072,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1925120,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-04T22:12:55.685642Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":721489556,"revision":1309,"compact-revision":1010}
	
	
	==> kernel <==
	 22:16:43 up  3:59,  0 users,  load average: 0.26, 0.31, 1.07
	Linux default-k8s-diff-port-601847 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e18526edf6ba391aabf631cee54d35ca7c972438d099f56f4a4c1145e634e4f8] <==
	I0904 22:14:39.556471       1 main.go:301] handling current node
	I0904 22:14:49.557579       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:14:49.557616       1 main.go:301] handling current node
	I0904 22:14:59.556872       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:14:59.556901       1 main.go:301] handling current node
	I0904 22:15:09.560855       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:15:09.560894       1 main.go:301] handling current node
	I0904 22:15:19.556873       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:15:19.556905       1 main.go:301] handling current node
	I0904 22:15:29.558205       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:15:29.558237       1 main.go:301] handling current node
	I0904 22:15:39.557470       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:15:39.557508       1 main.go:301] handling current node
	I0904 22:15:49.557468       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:15:49.557496       1 main.go:301] handling current node
	I0904 22:15:59.555457       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:15:59.555494       1 main.go:301] handling current node
	I0904 22:16:09.560833       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:16:09.560866       1 main.go:301] handling current node
	I0904 22:16:19.557195       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:16:19.557224       1 main.go:301] handling current node
	I0904 22:16:29.557142       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:16:29.557181       1 main.go:301] handling current node
	I0904 22:16:39.554623       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 22:16:39.554657       1 main.go:301] handling current node
	
	
	==> kube-apiserver [009da3d5b4890abf829a2b06e9ed211e4f39a80fde2b69482cbfddafbed269a3] <==
	I0904 22:12:58.686194       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 22:13:06.410986       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 22:13:40.171234       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 22:13:58.685672       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 22:13:58.685721       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 22:13:58.685735       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 22:13:58.686836       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 22:13:58.686889       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 22:13:58.686900       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 22:14:36.113000       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 22:14:50.029866       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 22:15:58.686878       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 22:15:58.686927       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 22:15:58.686943       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 22:15:58.687014       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 22:15:58.687132       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 22:15:58.687961       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 22:16:03.916805       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 22:16:19.132939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [0ff5410f92b61c2b92597a4296a621756c4875edd4d63dd8954d55b3c17e657b] <==
	I0904 22:10:33.462447       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:11:03.392581       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:11:03.468179       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:11:33.395906       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:11:33.474720       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:12:03.401135       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:12:03.481563       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:12:33.405405       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:12:33.488031       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:13:03.409193       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:13:03.494756       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:13:33.412935       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:13:33.501062       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:14:03.418665       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:14:03.508529       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:14:33.423208       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:14:33.514625       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:15:03.427211       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:15:03.521210       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:15:33.431339       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:15:33.527825       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:16:03.436444       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:16:03.534355       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 22:16:33.440729       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 22:16:33.541281       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [b795726e2372e2c2155e88b5fbd736497e9121c883d31f583086c1e1d48edd92] <==
	I0904 21:57:59.352190       1 server_linux.go:53] "Using iptables proxy"
	I0904 21:57:59.578698       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 21:57:59.681074       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 21:57:59.681111       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0904 21:57:59.681223       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 21:57:59.761516       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 21:57:59.761574       1 server_linux.go:132] "Using iptables Proxier"
	I0904 21:57:59.766870       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 21:57:59.767298       1 server.go:527] "Version info" version="v1.34.0"
	I0904 21:57:59.767339       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:57:59.768586       1 config.go:200] "Starting service config controller"
	I0904 21:57:59.768612       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 21:57:59.768644       1 config.go:106] "Starting endpoint slice config controller"
	I0904 21:57:59.768662       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 21:57:59.768685       1 config.go:309] "Starting node config controller"
	I0904 21:57:59.768691       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 21:57:59.768698       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 21:57:59.769108       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 21:57:59.769196       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 21:57:59.869220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 21:57:59.869242       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 21:57:59.869274       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c085eb94106dee9d8191474057cf3151b866fdfdfff4f11c0ecd257b24b464a1] <==
	I0904 21:57:55.764093       1 serving.go:386] Generated self-signed cert in-memory
	W0904 21:57:57.660706       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 21:57:57.660827       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 21:57:57.660867       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 21:57:57.660902       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 21:57:57.852123       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 21:57:57.852158       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:57:57.856441       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:57:57.856591       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 21:57:57.856724       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 21:57:57.856612       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 21:57:57.958069       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 22:15:54 default-k8s-diff-port-601847 kubelet[823]: E0904 22:15:54.102563     823 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757024154102355422  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:15:54 default-k8s-diff-port-601847 kubelet[823]: E0904 22:15:54.102602     823 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757024154102355422  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:15:59 default-k8s-diff-port-601847 kubelet[823]: I0904 22:15:59.882199     823 scope.go:117] "RemoveContainer" containerID="0e906c67380e6488f78350a83ba8ccad6ce2433edf650031e02897ecccf4621f"
	Sep 04 22:15:59 default-k8s-diff-port-601847 kubelet[823]: E0904 22:15:59.882456     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz65t_kubernetes-dashboard(32237f00-2ebe-43ce-89f8-a1e2e4c7a598)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz65t" podUID="32237f00-2ebe-43ce-89f8-a1e2e4c7a598"
	Sep 04 22:16:00 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:00.883037     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k7j78" podUID="4487a876-7f24-447b-afec-505bc3d62dbb"
	Sep 04 22:16:04 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:04.103669     823 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757024164103452575  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:16:04 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:04.103717     823 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757024164103452575  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:16:05 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:05.882956     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-22q8g" podUID="6e7da225-bc40-402a-aacd-963133c9e211"
	Sep 04 22:16:11 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:11.882561     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k7j78" podUID="4487a876-7f24-447b-afec-505bc3d62dbb"
	Sep 04 22:16:14 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:14.104836     823 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757024174104583952  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:16:14 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:14.104877     823 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757024174104583952  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:16:14 default-k8s-diff-port-601847 kubelet[823]: I0904 22:16:14.881693     823 scope.go:117] "RemoveContainer" containerID="0e906c67380e6488f78350a83ba8ccad6ce2433edf650031e02897ecccf4621f"
	Sep 04 22:16:14 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:14.881936     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz65t_kubernetes-dashboard(32237f00-2ebe-43ce-89f8-a1e2e4c7a598)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz65t" podUID="32237f00-2ebe-43ce-89f8-a1e2e4c7a598"
	Sep 04 22:16:16 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:16.882493     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-22q8g" podUID="6e7da225-bc40-402a-aacd-963133c9e211"
	Sep 04 22:16:22 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:22.882550     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k7j78" podUID="4487a876-7f24-447b-afec-505bc3d62dbb"
	Sep 04 22:16:24 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:24.105910     823 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757024184105661655  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:16:24 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:24.105951     823 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757024184105661655  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:16:29 default-k8s-diff-port-601847 kubelet[823]: I0904 22:16:29.882136     823 scope.go:117] "RemoveContainer" containerID="0e906c67380e6488f78350a83ba8ccad6ce2433edf650031e02897ecccf4621f"
	Sep 04 22:16:29 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:29.882378     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz65t_kubernetes-dashboard(32237f00-2ebe-43ce-89f8-a1e2e4c7a598)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz65t" podUID="32237f00-2ebe-43ce-89f8-a1e2e4c7a598"
	Sep 04 22:16:30 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:30.882849     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-22q8g" podUID="6e7da225-bc40-402a-aacd-963133c9e211"
	Sep 04 22:16:33 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:33.883632     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-k7j78" podUID="4487a876-7f24-447b-afec-505bc3d62dbb"
	Sep 04 22:16:34 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:34.107783     823 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757024194107538655  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:16:34 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:34.107823     823 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757024194107538655  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 22:16:42 default-k8s-diff-port-601847 kubelet[823]: I0904 22:16:42.881754     823 scope.go:117] "RemoveContainer" containerID="0e906c67380e6488f78350a83ba8ccad6ce2433edf650031e02897ecccf4621f"
	Sep 04 22:16:42 default-k8s-diff-port-601847 kubelet[823]: E0904 22:16:42.881992     823 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz65t_kubernetes-dashboard(32237f00-2ebe-43ce-89f8-a1e2e4c7a598)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz65t" podUID="32237f00-2ebe-43ce-89f8-a1e2e4c7a598"
	
	
	==> storage-provisioner [19865017dd69447d35978e2ded9b2720c45b59ad417ef057f3bbf96a2ddd64c1] <==
	W0904 22:16:18.679720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:20.682887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:20.686775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:22.690294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:22.693979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:24.697256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:24.700909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:26.703672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:26.708509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:28.710835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:28.714847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:30.717362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:30.722157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:32.724711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:32.728373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:34.731285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:34.736031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:36.738520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:36.742711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:38.745834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:38.749472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:40.753225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:40.757108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:42.760629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 22:16:42.766008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9928e6b6e53c41714e74f689852a89559605d108c2329637e53f78886041722d] <==
	I0904 21:57:59.065783       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0904 21:58:29.068218       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-k7j78 kubernetes-dashboard-855c9754f9-22q8g
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 describe pod metrics-server-746fcd58dc-k7j78 kubernetes-dashboard-855c9754f9-22q8g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-601847 describe pod metrics-server-746fcd58dc-k7j78 kubernetes-dashboard-855c9754f9-22q8g: exit status 1 (55.744134ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-k7j78" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-22q8g" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-601847 describe pod metrics-server-746fcd58dc-k7j78 kubernetes-dashboard-855c9754f9-22q8g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.36s)

                                                
                                    

Test pass (282/325)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.33
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.2
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 4.41
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.2
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.11
21 TestBinaryMirror 0.77
22 TestOffline 91.04
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 155.64
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.42
35 TestAddons/parallel/Registry 40.05
36 TestAddons/parallel/RegistryCreds 0.87
38 TestAddons/parallel/InspektorGadget 6.25
39 TestAddons/parallel/MetricsServer 6.79
42 TestAddons/parallel/Headlamp 48.01
43 TestAddons/parallel/CloudSpanner 5.45
45 TestAddons/parallel/NvidiaDevicePlugin 5.43
46 TestAddons/parallel/Yakd 11.05
47 TestAddons/parallel/AmdGpuDevicePlugin 5.44
48 TestAddons/StoppedEnableDisable 12.07
49 TestCertOptions 28.83
50 TestCertExpiration 225.56
52 TestForceSystemdFlag 26.93
53 TestForceSystemdEnv 26.24
55 TestKVMDriverInstallOrUpdate 2.91
59 TestErrorSpam/setup 24.18
60 TestErrorSpam/start 0.56
61 TestErrorSpam/status 0.83
62 TestErrorSpam/pause 1.48
63 TestErrorSpam/unpause 1.63
64 TestErrorSpam/stop 1.35
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 68.71
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 27.51
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.99
76 TestFunctional/serial/CacheCmd/cache/add_local 1.25
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 37.9
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.31
87 TestFunctional/serial/LogsFileCmd 1.31
88 TestFunctional/serial/InvalidService 4.47
90 TestFunctional/parallel/ConfigCmd 0.38
92 TestFunctional/parallel/DryRun 0.33
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.87
99 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/SSHCmd 0.55
103 TestFunctional/parallel/CpCmd 1.62
105 TestFunctional/parallel/FileSync 0.26
106 TestFunctional/parallel/CertSync 1.63
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
114 TestFunctional/parallel/License 0.44
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.45
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.02
122 TestFunctional/parallel/ImageCommands/Setup 1.38
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.22
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
145 TestFunctional/parallel/ProfileCmd/profile_list 0.35
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
147 TestFunctional/parallel/MountCmd/any-port 83.49
148 TestFunctional/parallel/MountCmd/specific-port 1.65
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.77
150 TestFunctional/parallel/ServiceCmd/List 1.67
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.67
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 179.8
163 TestMultiControlPlane/serial/DeployApp 4.03
164 TestMultiControlPlane/serial/PingHostFromPods 1.03
165 TestMultiControlPlane/serial/AddWorkerNode 54.68
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
168 TestMultiControlPlane/serial/CopyFile 15.14
169 TestMultiControlPlane/serial/StopSecondaryNode 12.48
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
171 TestMultiControlPlane/serial/RestartSecondaryNode 20.91
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 120.88
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.24
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
176 TestMultiControlPlane/serial/StopCluster 35.52
177 TestMultiControlPlane/serial/RestartCluster 59.65
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
179 TestMultiControlPlane/serial/AddSecondaryNode 67.08
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.8
184 TestJSONOutput/start/Command 71.43
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.64
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.56
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.72
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
209 TestKicCustomNetwork/create_custom_network 29.95
210 TestKicCustomNetwork/use_default_bridge_network 26.71
211 TestKicExistingNetwork 23.24
212 TestKicCustomSubnet 27.12
213 TestKicStaticIP 24.12
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 51.29
218 TestMountStart/serial/StartWithMountFirst 5.46
219 TestMountStart/serial/VerifyMountFirst 0.23
220 TestMountStart/serial/StartWithMountSecond 5.22
221 TestMountStart/serial/VerifyMountSecond 0.23
222 TestMountStart/serial/DeleteFirst 1.59
223 TestMountStart/serial/VerifyMountPostDelete 0.24
224 TestMountStart/serial/Stop 1.17
225 TestMountStart/serial/RestartStopped 7.33
226 TestMountStart/serial/VerifyMountPostStop 0.23
229 TestMultiNode/serial/FreshStart2Nodes 125.54
230 TestMultiNode/serial/DeployApp2Nodes 3.9
231 TestMultiNode/serial/PingHostFrom2Pods 0.71
232 TestMultiNode/serial/AddNode 56.9
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.58
235 TestMultiNode/serial/CopyFile 8.51
236 TestMultiNode/serial/StopNode 2.04
237 TestMultiNode/serial/StartAfterStop 7.21
238 TestMultiNode/serial/RestartKeepsNodes 69.27
239 TestMultiNode/serial/DeleteNode 5.1
240 TestMultiNode/serial/StopMultiNode 23.7
241 TestMultiNode/serial/RestartMultiNode 44.38
242 TestMultiNode/serial/ValidateNameConflict 26.62
247 TestPreload 109.99
249 TestScheduledStopUnix 99.4
252 TestInsufficientStorage 9.8
253 TestRunningBinaryUpgrade 66.03
255 TestKubernetesUpgrade 341.17
256 TestMissingContainerUpgrade 65.29
257 TestStoppedBinaryUpgrade/Setup 0.55
258 TestStoppedBinaryUpgrade/Upgrade 64.44
259 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
268 TestPause/serial/Start 74.92
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
271 TestNoKubernetes/serial/StartWithK8s 29.44
279 TestNetworkPlugins/group/false 3.54
280 TestNoKubernetes/serial/StartWithStopK8s 5.67
284 TestNoKubernetes/serial/Start 6.7
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
286 TestNoKubernetes/serial/ProfileList 1.89
287 TestNoKubernetes/serial/Stop 1.19
288 TestNoKubernetes/serial/StartNoArgs 6.13
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
290 TestPause/serial/SecondStartNoReconfiguration 17.11
291 TestPause/serial/Pause 0.74
292 TestPause/serial/VerifyStatus 0.28
293 TestPause/serial/Unpause 0.61
294 TestPause/serial/PauseAgain 0.82
295 TestPause/serial/DeletePaused 2.67
296 TestPause/serial/VerifyDeletedResources 16.33
298 TestStartStop/group/old-k8s-version/serial/FirstStart 54.32
299 TestStartStop/group/old-k8s-version/serial/DeployApp 8.26
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
302 TestStartStop/group/no-preload/serial/FirstStart 61.56
303 TestStartStop/group/old-k8s-version/serial/Stop 13.08
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
305 TestStartStop/group/old-k8s-version/serial/SecondStart 47.56
306 TestStartStop/group/no-preload/serial/DeployApp 9.3
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
310 TestStartStop/group/no-preload/serial/Stop 12.17
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
312 TestStartStop/group/old-k8s-version/serial/Pause 2.64
314 TestStartStop/group/embed-certs/serial/FirstStart 71.37
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/no-preload/serial/SecondStart 49.04
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.94
320 TestStartStop/group/newest-cni/serial/FirstStart 27.55
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
325 TestStartStop/group/newest-cni/serial/Stop 1.19
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
327 TestStartStop/group/newest-cni/serial/SecondStart 13.46
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
329 TestStartStop/group/no-preload/serial/Pause 2.58
330 TestStartStop/group/embed-certs/serial/DeployApp 8.28
331 TestNetworkPlugins/group/auto/Start 72.48
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
335 TestStartStop/group/newest-cni/serial/Pause 2.75
336 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
337 TestStartStop/group/embed-certs/serial/Stop 11.91
338 TestNetworkPlugins/group/kindnet/Start 73.68
339 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
340 TestStartStop/group/embed-certs/serial/SecondStart 51.97
341 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.32
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
343 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.85
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.18
346 TestNetworkPlugins/group/auto/KubeletFlags 0.25
347 TestNetworkPlugins/group/auto/NetCatPod 10.19
348 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
349 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
350 TestNetworkPlugins/group/auto/DNS 0.12
351 TestNetworkPlugins/group/auto/Localhost 0.13
352 TestNetworkPlugins/group/auto/HairPin 0.1
353 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
354 TestStartStop/group/embed-certs/serial/Pause 2.56
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.26
360 TestNetworkPlugins/group/custom-flannel/Start 48.53
361 TestNetworkPlugins/group/kindnet/DNS 0.15
362 TestNetworkPlugins/group/kindnet/Localhost 0.11
363 TestNetworkPlugins/group/kindnet/HairPin 0.12
364 TestNetworkPlugins/group/enable-default-cni/Start 65.55
365 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
366 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.17
367 TestNetworkPlugins/group/custom-flannel/DNS 0.12
368 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
369 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
370 TestNetworkPlugins/group/flannel/Start 61.66
371 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
373 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
374 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
375 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
376 TestNetworkPlugins/group/bridge/Start 62.66
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
379 TestNetworkPlugins/group/flannel/NetCatPod 8.16
380 TestNetworkPlugins/group/flannel/DNS 0.12
381 TestNetworkPlugins/group/flannel/Localhost 0.1
382 TestNetworkPlugins/group/flannel/HairPin 0.1
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
384 TestNetworkPlugins/group/bridge/NetCatPod 8.17
385 TestNetworkPlugins/group/bridge/DNS 0.18
386 TestNetworkPlugins/group/bridge/Localhost 0.1
387 TestNetworkPlugins/group/bridge/HairPin 0.1
389 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
390 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.44
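Each row above gives a passing test's index, full name, and wall-clock duration in seconds. A small, hypothetical Go sketch (not part of the minikube or gopogh tooling) that reads rows in this three-column shape from stdin and prints the ten slowest tests:

package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strconv"
	"strings"
)

type result struct {
	name    string
	seconds float64
}

func main() {
	// Expect rows shaped like "238 TestMultiNode/serial/RestartKeepsNodes 69.27" on stdin.
	var results []result
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 3 {
			continue // skip anything that is not an "index name seconds" row
		}
		secs, err := strconv.ParseFloat(fields[2], 64)
		if err != nil {
			continue
		}
		results = append(results, result{name: fields[1], seconds: secs})
	}
	sort.Slice(results, func(i, j int) bool { return results[i].seconds > results[j].seconds })
	for i := 0; i < len(results) && i < 10; i++ {
		fmt.Printf("%8.2fs  %s\n", results[i].seconds, results[i].name)
	}
}

Fed the rows above, this would surface the long runs such as TestKubernetesUpgrade (341.17s) and TestAddons/Setup (155.64s) first.
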
TestDownloadOnly/v1.28.0/json-events (5.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-640345 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-640345 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.325308632s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.33s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0904 20:55:28.523530  388360 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0904 20:55:28.523670  388360 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-640345
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-640345: exit status 85 (61.288627ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-640345 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-640345 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 20:55:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:55:23.239030  388372 out.go:360] Setting OutFile to fd 1 ...
	I0904 20:55:23.239299  388372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:23.239307  388372 out.go:374] Setting ErrFile to fd 2...
	I0904 20:55:23.239311  388372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:23.239529  388372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	W0904 20:55:23.239650  388372 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21490-384635/.minikube/config/config.json: open /home/jenkins/minikube-integration/21490-384635/.minikube/config/config.json: no such file or directory
	I0904 20:55:23.240224  388372 out.go:368] Setting JSON to true
	I0904 20:55:23.241197  388372 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9472,"bootTime":1757009851,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 20:55:23.241252  388372 start.go:140] virtualization: kvm guest
	I0904 20:55:23.243336  388372 out.go:99] [download-only-640345] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0904 20:55:23.243463  388372 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball: no such file or directory
	I0904 20:55:23.243512  388372 notify.go:220] Checking for updates...
	I0904 20:55:23.244790  388372 out.go:171] MINIKUBE_LOCATION=21490
	I0904 20:55:23.246203  388372 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:55:23.247498  388372 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 20:55:23.248676  388372 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 20:55:23.249744  388372 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0904 20:55:23.251728  388372 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 20:55:23.251956  388372 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 20:55:23.274434  388372 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 20:55:23.274547  388372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:55:23.318961  388372 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-04 20:55:23.310230079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 20:55:23.319076  388372 docker.go:318] overlay module found
	I0904 20:55:23.320747  388372 out.go:99] Using the docker driver based on user configuration
	I0904 20:55:23.320813  388372 start.go:304] selected driver: docker
	I0904 20:55:23.320824  388372 start.go:918] validating driver "docker" against <nil>
	I0904 20:55:23.320996  388372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:55:23.367897  388372 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-04 20:55:23.359230121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 20:55:23.368093  388372 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 20:55:23.368663  388372 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0904 20:55:23.368865  388372 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 20:55:23.370668  388372 out.go:171] Using Docker driver with root privileges
	I0904 20:55:23.371786  388372 cni.go:84] Creating CNI manager for ""
	I0904 20:55:23.371867  388372 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:55:23.371880  388372 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 20:55:23.371954  388372 start.go:348] cluster config:
	{Name:download-only-640345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-640345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:55:23.373101  388372 out.go:99] Starting "download-only-640345" primary control-plane node in "download-only-640345" cluster
	I0904 20:55:23.373126  388372 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 20:55:23.374229  388372 out.go:99] Pulling base image v0.0.47-1756116447-21413 ...
	I0904 20:55:23.374255  388372 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0904 20:55:23.374380  388372 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0904 20:55:23.390183  388372 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 to local cache
	I0904 20:55:23.390389  388372 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory
	I0904 20:55:23.390489  388372 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 to local cache
	I0904 20:55:23.401307  388372 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0904 20:55:23.401333  388372 cache.go:58] Caching tarball of preloaded images
	I0904 20:55:23.401493  388372 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0904 20:55:23.403083  388372 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0904 20:55:23.403100  388372 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 20:55:23.433440  388372 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0904 20:55:26.929546  388372 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 20:55:26.929632  388372 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 20:55:27.203498  388372 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 as a tarball
	I0904 20:55:27.812501  388372 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0904 20:55:27.812877  388372 profile.go:143] Saving config to /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/download-only-640345/config.json ...
	I0904 20:55:27.812916  388372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/download-only-640345/config.json: {Name:mk45cb15c53f1047ae9cb7430076fde0060f48f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:55:27.813078  388372 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0904 20:55:27.813255  388372 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21490-384635/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-640345 host does not exist
	  To start a cluster, run: "minikube start -p download-only-640345"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
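The preload tarball in the log above is fetched with a "?checksum=md5:..." query parameter, and the following lines report saving and verifying that checksum before the tarball is used. A minimal, hypothetical Go sketch of that verify-after-download step (not minikube's actual preload code); the file name and digest are the ones shown in the log:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file and compares its md5 digest against the expected
// hex string (the value carried in the ?checksum=md5:... query parameter).
func verifyMD5(path, expected string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	return hex.EncodeToString(h.Sum(nil)) == expected, nil
}

func main() {
	ok, err := verifyMD5("preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4",
		"72bc7f8573f574c02d8c9a9b3496176b")
	fmt.Println(ok, err)
}
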

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-640345
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (4.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-807406 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-807406 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.414208709s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.41s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0904 20:55:33.326398  388360 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0904 20:55:33.326443  388360 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21490-384635/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-807406
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-807406: exit status 85 (62.375613ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-640345 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-640345 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ delete  │ -p download-only-640345                                                                                                                                                   │ download-only-640345 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │ 04 Sep 25 20:55 UTC │
	│ start   │ -o=json --download-only -p download-only-807406 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-807406 │ jenkins │ v1.36.0 │ 04 Sep 25 20:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 20:55:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:55:28.950806  388717 out.go:360] Setting OutFile to fd 1 ...
	I0904 20:55:28.951312  388717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:28.951358  388717 out.go:374] Setting ErrFile to fd 2...
	I0904 20:55:28.951375  388717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 20:55:28.951837  388717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 20:55:28.952848  388717 out.go:368] Setting JSON to true
	I0904 20:55:28.953706  388717 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9478,"bootTime":1757009851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 20:55:28.953798  388717 start.go:140] virtualization: kvm guest
	I0904 20:55:28.955390  388717 out.go:99] [download-only-807406] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 20:55:28.955567  388717 notify.go:220] Checking for updates...
	I0904 20:55:28.956528  388717 out.go:171] MINIKUBE_LOCATION=21490
	I0904 20:55:28.957729  388717 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:55:28.958961  388717 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 20:55:28.960050  388717 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 20:55:28.961156  388717 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0904 20:55:28.963222  388717 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 20:55:28.963429  388717 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 20:55:28.984669  388717 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 20:55:28.984728  388717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:55:29.029627  388717 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:50 SystemTime:2025-09-04 20:55:29.020805696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 20:55:29.029803  388717 docker.go:318] overlay module found
	I0904 20:55:29.031792  388717 out.go:99] Using the docker driver based on user configuration
	I0904 20:55:29.031828  388717 start.go:304] selected driver: docker
	I0904 20:55:29.031836  388717 start.go:918] validating driver "docker" against <nil>
	I0904 20:55:29.031912  388717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:55:29.077584  388717 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:50 SystemTime:2025-09-04 20:55:29.068964732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 20:55:29.077789  388717 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 20:55:29.078306  388717 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0904 20:55:29.078485  388717 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 20:55:29.080167  388717 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-807406 host does not exist
	  To start a cluster, run: "minikube start -p download-only-807406"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-807406
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1.11s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-306069 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-306069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-306069
--- PASS: TestDownloadOnlyKic (1.11s)

                                                
                                    
TestBinaryMirror (0.77s)

                                                
                                                
=== RUN   TestBinaryMirror
I0904 20:55:35.071019  388360 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-563304 --alsologtostderr --binary-mirror http://127.0.0.1:41655 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-563304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-563304
--- PASS: TestBinaryMirror (0.77s)

                                                
                                    
TestOffline (91.04s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-788504 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-788504 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m28.688006565s)
helpers_test.go:175: Cleaning up "offline-crio-788504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-788504
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-788504: (2.349230073s)
--- PASS: TestOffline (91.04s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-049370
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-049370: exit status 85 (50.862918ms)

                                                
                                                
-- stdout --
	* Profile "addons-049370" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-049370"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-049370
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-049370: exit status 85 (51.841182ms)

                                                
                                                
-- stdout --
	* Profile "addons-049370" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-049370"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (155.64s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-049370 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-049370 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m35.640644397s)
--- PASS: TestAddons/Setup (155.64s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-049370 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-049370 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.42s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-049370 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-049370 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f06b0177-ed93-4b45-a714-cffa2245a8a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f06b0177-ed93-4b45-a714-cffa2245a8a7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003316532s
addons_test.go:694: (dbg) Run:  kubectl --context addons-049370 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-049370 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-049370 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.42s)
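The "waiting 8m0s for pods matching ..." lines above come from test helpers that poll the cluster until every pod matching a label selector reports Ready. A rough, hypothetical client-go sketch of that polling pattern (not the actual helpers_test.go implementation; the kubeconfig path, namespace, selector, and timeouts are assumptions taken from this log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Load the default kubeconfig (~/.kube/config); the test suite points this at the minikube profile.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, for up to 8m, until all matching pods are Ready.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 8*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=busybox",
			})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // treat transient errors and an empty list as "not ready yet"
			}
			for _, p := range pods.Items {
				if !podReady(p) {
					return false, nil
				}
			}
			return true, nil
		})
	fmt.Println("all pods ready:", err == nil)
}
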

                                                
                                    
TestAddons/parallel/Registry (40.05s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.007537ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-2lbsx" [7bb317fd-9e98-41e6-a9bd-15826d701411] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003183062s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-84zdv" [83f3d976-f7e6-44f0-a75e-408cd4584cfd] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003325491s
addons_test.go:392: (dbg) Run:  kubectl --context addons-049370 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-049370 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-049370 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (28.326551639s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 ip
2025/09/04 20:59:09 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (40.05s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.87s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 51.975937ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-049370
addons_test.go:332: (dbg) Run:  kubectl --context addons-049370 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.87s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-whkft" [1da195ec-23be-43b1-bff4-9d310ecd7c8d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00294015s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.25s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.79s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.372465ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-kgh4z" [85361692-2630-41a0-bd24-f432d8ed4a23] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003105333s
addons_test.go:463: (dbg) Run:  kubectl --context addons-049370 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.79s)

                                                
                                    
TestAddons/parallel/Headlamp (48.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-049370 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-049370 --alsologtostderr -v=1: (1.41459172s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-l9jqj" [8588da7d-a51f-4606-82d4-aab395812199] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-l9jqj" [8588da7d-a51f-4606-82d4-aab395812199] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-l9jqj" [8588da7d-a51f-4606-82d4-aab395812199] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 41.00346262s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-049370 addons disable headlamp --alsologtostderr -v=1: (5.587956168s)
--- PASS: TestAddons/parallel/Headlamp (48.01s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-hjq5p" [ef4e9bb8-92fe-4b29-814d-4d44f3160e16] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003387113s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.45s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.43s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-kqhfs" [d73bdd1f-d0db-435b-89c5-fc48b0ac0590] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003116302s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.43s)

                                                
                                    
TestAddons/parallel/Yakd (11.05s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-7gnsc" [03e6c648-ce26-4886-84b7-b19f162895b8] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003725694s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-049370 addons disable yakd --alsologtostderr -v=1: (6.047466701s)
--- PASS: TestAddons/parallel/Yakd (11.05s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.44s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-ddj6n" [dd553d62-6014-4d9a-b40a-e960cd942c31] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003155075s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.44s)

TestAddons/StoppedEnableDisable (12.07s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-049370
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-049370: (11.820494005s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-049370
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-049370
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-049370
--- PASS: TestAddons/StoppedEnableDisable (12.07s)

TestCertOptions (28.83s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-143521 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-143521 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.446857424s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-143521 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-143521 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-143521 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-143521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-143521
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-143521: (1.83323274s)
--- PASS: TestCertOptions (28.83s)

TestCertExpiration (225.56s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-943568 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-943568 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.457399907s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-943568 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-943568 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.750770244s)
helpers_test.go:175: Cleaning up "cert-expiration-943568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-943568
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-943568: (2.352546473s)
--- PASS: TestCertExpiration (225.56s)

TestForceSystemdFlag (26.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-071673 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-071673 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.278075824s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-071673 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-071673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-071673
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-071673: (2.395899489s)
--- PASS: TestForceSystemdFlag (26.93s)

TestForceSystemdEnv (26.24s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-884399 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-884399 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.854837357s)
helpers_test.go:175: Cleaning up "force-systemd-env-884399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-884399
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-884399: (2.381917488s)
--- PASS: TestForceSystemdEnv (26.24s)

TestKVMDriverInstallOrUpdate (2.91s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0904 21:52:12.794028  388360 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0904 21:52:12.794197  388360 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
--- PASS: TestKVMDriverInstallOrUpdate (2.91s)

TestErrorSpam/setup (24.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-836446 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-836446 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-836446 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-836446 --driver=docker  --container-runtime=crio: (24.175380474s)
--- PASS: TestErrorSpam/setup (24.18s)

TestErrorSpam/start (0.56s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

TestErrorSpam/status (0.83s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 status
--- PASS: TestErrorSpam/status (0.83s)

TestErrorSpam/pause (1.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 pause
--- PASS: TestErrorSpam/pause (1.48s)

TestErrorSpam/unpause (1.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

TestErrorSpam/stop (1.35s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 stop: (1.170423457s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-836446 --log_dir /tmp/nospam-836446 stop
--- PASS: TestErrorSpam/stop (1.35s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21490-384635/.minikube/files/etc/test/nested/copy/388360/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (68.71s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434682 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0904 21:08:12.074035  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:12.080483  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:12.091864  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:12.113230  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:12.154549  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:12.235953  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:12.397451  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:12.719137  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:13.361137  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:14.642779  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:17.204890  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:22.326368  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:08:32.568510  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-434682 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m8.712374066s)
--- PASS: TestFunctional/serial/StartWithProxy (68.71s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.51s)

=== RUN   TestFunctional/serial/SoftStart
I0904 21:08:42.027715  388360 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434682 --alsologtostderr -v=8
E0904 21:08:53.050220  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-434682 --alsologtostderr -v=8: (27.509743136s)
functional_test.go:678: soft start took 27.510507066s for "functional-434682" cluster.
I0904 21:09:09.537820  388360 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (27.51s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-434682 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-434682 cache add registry.k8s.io/pause:3.3: (1.051033592s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.99s)

TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-434682 /tmp/TestFunctionalserialCacheCmdcacheadd_local1733509004/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 cache add minikube-local-cache-test:functional-434682
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 cache delete minikube-local-cache-test:functional-434682
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-434682
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (256.431839ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 kubectl -- --context functional-434682 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-434682 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.9s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434682 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0904 21:09:34.012930  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-434682 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.899982695s)
functional_test.go:776: restart took 37.900115822s for "functional-434682" cluster.
I0904 21:09:54.121504  388360 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (37.90s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-434682 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.31s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-434682 logs: (1.307713202s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

TestFunctional/serial/LogsFileCmd (1.31s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 logs --file /tmp/TestFunctionalserialLogsFileCmd2018222706/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-434682 logs --file /tmp/TestFunctionalserialLogsFileCmd2018222706/001/logs.txt: (1.304536933s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

TestFunctional/serial/InvalidService (4.47s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-434682 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-434682
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-434682: exit status 115 (318.876738ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31873 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-434682 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.47s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 config get cpus: exit status 14 (66.311932ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 config get cpus: exit status 14 (58.525341ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434682 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-434682 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (143.966133ms)
-- stdout --
	* [functional-434682] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0904 21:20:04.666529  441634 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:20:04.666646  441634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:20:04.666655  441634 out.go:374] Setting ErrFile to fd 2...
	I0904 21:20:04.666658  441634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:20:04.666862  441634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:20:04.667404  441634 out.go:368] Setting JSON to false
	I0904 21:20:04.668397  441634 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10954,"bootTime":1757009851,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:20:04.668492  441634 start.go:140] virtualization: kvm guest
	I0904 21:20:04.670537  441634 out.go:179] * [functional-434682] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 21:20:04.671948  441634 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:20:04.671966  441634 notify.go:220] Checking for updates...
	I0904 21:20:04.674226  441634 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:20:04.675278  441634 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 21:20:04.676431  441634 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 21:20:04.677618  441634 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:20:04.678820  441634 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:20:04.680335  441634 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:20:04.680745  441634 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:20:04.702096  441634 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 21:20:04.702167  441634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:20:04.750556  441634 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-04 21:20:04.741241687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:20:04.750676  441634 docker.go:318] overlay module found
	I0904 21:20:04.752545  441634 out.go:179] * Using the docker driver based on existing profile
	I0904 21:20:04.753666  441634 start.go:304] selected driver: docker
	I0904 21:20:04.753679  441634 start.go:918] validating driver "docker" against &{Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:20:04.753799  441634 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:20:04.755719  441634 out.go:203] 
	W0904 21:20:04.756882  441634 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 21:20:04.757979  441634 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434682 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434682 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-434682 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (144.871522ms)
-- stdout --
	* [functional-434682] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0904 21:20:04.993040  441830 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:20:04.993174  441830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:20:04.993196  441830 out.go:374] Setting ErrFile to fd 2...
	I0904 21:20:04.993204  441830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:20:04.993516  441830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:20:04.994106  441830 out.go:368] Setting JSON to false
	I0904 21:20:04.995202  441830 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10954,"bootTime":1757009851,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:20:04.995343  441830 start.go:140] virtualization: kvm guest
	I0904 21:20:04.997427  441830 out.go:179] * [functional-434682] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0904 21:20:04.999082  441830 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:20:04.999117  441830 notify.go:220] Checking for updates...
	I0904 21:20:05.001703  441830 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:20:05.002997  441830 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 21:20:05.004192  441830 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 21:20:05.005525  441830 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:20:05.006818  441830 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:20:05.008411  441830 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:20:05.008953  441830 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:20:05.029798  441830 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 21:20:05.029929  441830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:20:05.076670  441830 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-04 21:20:05.067863181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:20:05.076787  441830 docker.go:318] overlay module found
	I0904 21:20:05.079775  441830 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0904 21:20:05.081002  441830 start.go:304] selected driver: docker
	I0904 21:20:05.081019  441830 start.go:918] validating driver "docker" against &{Name:functional-434682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-434682 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:20:05.081107  441830 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:20:05.083069  441830 out.go:203] 
	W0904 21:20:05.084193  441830 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0904 21:20:05.085390  441830 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.87s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (1.62s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh -n functional-434682 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 cp functional-434682:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1235956994/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh -n functional-434682 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh -n functional-434682 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/388360/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "sudo cat /etc/test/nested/copy/388360/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.63s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/388360.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "sudo cat /etc/ssl/certs/388360.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/388360.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "sudo cat /usr/share/ca-certificates/388360.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3883602.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "sudo cat /etc/ssl/certs/3883602.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3883602.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "sudo cat /usr/share/ca-certificates/3883602.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.63s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-434682 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 ssh "sudo systemctl is-active docker": exit status 1 (278.962032ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 ssh "sudo systemctl is-active containerd": exit status 1 (272.615817ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
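Note on the result above: with crio as the configured runtime, the test expects the docker and containerd units to be stopped, and "systemctl is-active" signals "inactive" with exit status 3, so the two non-zero exits are the passing path rather than failures. A minimal Go sketch of the same probe, assuming the binary path and profile name shown in this run (this is not the actual functional_test.go helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeInactive runs the same `minikube ssh "sudo systemctl is-active <unit>"`
// probe seen in the log and treats a non-zero exit plus "inactive" on stdout as
// the expected outcome for a runtime other than the active one.
func runtimeInactive(profile, unit string) (bool, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit)
	out, err := cmd.Output() // stdout only; "inactive" is printed there
	state := strings.TrimSpace(string(out))
	if err != nil && state == "inactive" {
		return true, nil
	}
	return false, fmt.Errorf("unit %s reported %q: %v", unit, state, err)
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		ok, err := runtimeInactive("functional-434682", unit)
		fmt.Println(unit, ok, err)
	}
}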
TestFunctional/parallel/License (0.44s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.44s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.45s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-434682 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-434682
localhost/kicbase/echo-server:functional-434682
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-434682 image ls --format short --alsologtostderr:
I0904 21:20:06.010444  442300 out.go:360] Setting OutFile to fd 1 ...
I0904 21:20:06.010699  442300 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:20:06.010709  442300 out.go:374] Setting ErrFile to fd 2...
I0904 21:20:06.010714  442300 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:20:06.010892  442300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
I0904 21:20:06.011435  442300 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:20:06.011528  442300 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:20:06.011900  442300 cli_runner.go:164] Run: docker container inspect functional-434682 --format={{.State.Status}}
I0904 21:20:06.029996  442300 ssh_runner.go:195] Run: systemctl --version
I0904 21:20:06.030065  442300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
I0904 21:20:06.047147  442300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
I0904 21:20:06.129268  442300 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-434682 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/minikube-local-cache-test     │ functional-434682  │ ae31280b441f9 │ 3.33kB │
│ localhost/my-image                      │ functional-434682  │ 673490dba416b │ 1.47MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/kicbase/echo-server           │ functional-434682  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-434682 image ls --format table --alsologtostderr:
I0904 21:20:08.655108  442863 out.go:360] Setting OutFile to fd 1 ...
I0904 21:20:08.655323  442863 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:20:08.655336  442863 out.go:374] Setting ErrFile to fd 2...
I0904 21:20:08.655342  442863 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:20:08.655615  442863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
I0904 21:20:08.656259  442863 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:20:08.656372  442863 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:20:08.656740  442863 cli_runner.go:164] Run: docker container inspect functional-434682 --format={{.State.Status}}
I0904 21:20:08.675785  442863 ssh_runner.go:195] Run: systemctl --version
I0904 21:20:08.675851  442863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
I0904 21:20:08.694076  442863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
I0904 21:20:08.776732  442863 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-434682 image ls --format json --alsologtostderr:
[{"id":"ae31280b441f94b6ba83b03d8969302cad5ae57dacd4f87b778b61d657efc306","repoDigests":["localhost/minikube-local-cache-test@sha256:7ee6e8c75fd4f267d903b163d0e32c43f058693d546cd7bf6c4a21352d620a07"],"repoTags":["localhost/minikube-local-cache-test:functional-434682"],"size":"3330"},{"id":"673490dba416b7dd09f67512331053e6bc000141edde9a9a0e5cdb6ba0488d9b","repoDigests":["localhost/my-image@sha256:ac609be242d432e7b658f6bc6bb112e2fb5ce374adc0f9cf45a45cd960a1ac45"],"repoTags":["localhost/my-image:functional-434682"],"size":"1468194"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59d
d517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"45063b57046f21c23dea5e5318fe1f545a94f52f8396fe7fb50bf5d674ff0fb8","repoDigests":["docker.io/library/167d4221b55dc827cb7216189dcbeff175f8daea122391e3434d8f83c1327f37-tmp@sha256:254fd85f867ac782773e8b9f1524dadf4518077fb152b63d266f4ad319267cb6"],"repoTags":[],"size":"1465612"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c02
0289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34
.0"],"size":"89050097"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a1339822
6c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-434682"],"size":"4943877"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-434682 image ls --format json --alsologtostderr:
I0904 21:20:08.438814  442801 out.go:360] Setting OutFile to fd 1 ...
I0904 21:20:08.439048  442801 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:20:08.439057  442801 out.go:374] Setting ErrFile to fd 2...
I0904 21:20:08.439061  442801 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:20:08.439265  442801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
I0904 21:20:08.439849  442801 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:20:08.439936  442801 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:20:08.440293  442801 cli_runner.go:164] Run: docker container inspect functional-434682 --format={{.State.Status}}
I0904 21:20:08.457448  442801 ssh_runner.go:195] Run: systemctl --version
I0904 21:20:08.457507  442801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
I0904 21:20:08.474723  442801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
I0904 21:20:08.557596  442801 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-434682 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: ae31280b441f94b6ba83b03d8969302cad5ae57dacd4f87b778b61d657efc306
repoDigests:
- localhost/minikube-local-cache-test@sha256:7ee6e8c75fd4f267d903b163d0e32c43f058693d546cd7bf6c4a21352d620a07
repoTags:
- localhost/minikube-local-cache-test:functional-434682
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-434682
size: "4943877"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-434682 image ls --format yaml --alsologtostderr:
I0904 21:20:06.214174  442350 out.go:360] Setting OutFile to fd 1 ...
I0904 21:20:06.214399  442350 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:20:06.214409  442350 out.go:374] Setting ErrFile to fd 2...
I0904 21:20:06.214413  442350 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:20:06.214617  442350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
I0904 21:20:06.215144  442350 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:20:06.215267  442350 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:20:06.215655  442350 cli_runner.go:164] Run: docker container inspect functional-434682 --format={{.State.Status}}
I0904 21:20:06.233276  442350 ssh_runner.go:195] Run: systemctl --version
I0904 21:20:06.233328  442350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
I0904 21:20:06.250289  442350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
I0904 21:20:06.333098  442350 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 ssh pgrep buildkitd: exit status 1 (227.785996ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image build -t localhost/my-image:functional-434682 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-434682 image build -t localhost/my-image:functional-434682 testdata/build --alsologtostderr: (1.584058466s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-434682 image build -t localhost/my-image:functional-434682 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 45063b57046
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-434682
--> 673490dba41
Successfully tagged localhost/my-image:functional-434682
673490dba416b7dd09f67512331053e6bc000141edde9a9a0e5cdb6ba0488d9b
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-434682 image build -t localhost/my-image:functional-434682 testdata/build --alsologtostderr:
I0904 21:20:06.645940  442494 out.go:360] Setting OutFile to fd 1 ...
I0904 21:20:06.646188  442494 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:20:06.646196  442494 out.go:374] Setting ErrFile to fd 2...
I0904 21:20:06.646200  442494 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 21:20:06.646418  442494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
I0904 21:20:06.646987  442494 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:20:06.647745  442494 config.go:182] Loaded profile config "functional-434682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 21:20:06.648181  442494 cli_runner.go:164] Run: docker container inspect functional-434682 --format={{.State.Status}}
I0904 21:20:06.665931  442494 ssh_runner.go:195] Run: systemctl --version
I0904 21:20:06.665975  442494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-434682
I0904 21:20:06.682269  442494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/functional-434682/id_rsa Username:docker}
I0904 21:20:06.764861  442494 build_images.go:161] Building image from path: /tmp/build.3870301717.tar
I0904 21:20:06.764937  442494 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0904 21:20:06.772979  442494 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3870301717.tar
I0904 21:20:06.776028  442494 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3870301717.tar: stat -c "%s %y" /var/lib/minikube/build/build.3870301717.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3870301717.tar': No such file or directory
I0904 21:20:06.776061  442494 ssh_runner.go:362] scp /tmp/build.3870301717.tar --> /var/lib/minikube/build/build.3870301717.tar (3072 bytes)
I0904 21:20:06.797444  442494 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3870301717
I0904 21:20:06.805087  442494 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3870301717 -xf /var/lib/minikube/build/build.3870301717.tar
I0904 21:20:06.813031  442494 crio.go:315] Building image: /var/lib/minikube/build/build.3870301717
I0904 21:20:06.813113  442494 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-434682 /var/lib/minikube/build/build.3870301717 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0904 21:20:08.156423  442494 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-434682 /var/lib/minikube/build/build.3870301717 --cgroup-manager=cgroupfs: (1.343263702s)
I0904 21:20:08.156483  442494 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3870301717
I0904 21:20:08.164804  442494 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3870301717.tar
I0904 21:20:08.172205  442494 build_images.go:217] Built localhost/my-image:functional-434682 from /tmp/build.3870301717.tar
I0904 21:20:08.172236  442494 build_images.go:133] succeeded building to: functional-434682
I0904 21:20:08.172242  442494 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.02s)
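The build log above shows minikube copying the local testdata/build context to the node as a tar and replaying it with podman (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A short Go sketch of the two commands the test drives, image build followed by image ls, assuming the binary path, profile, and tag taken from this run (not the real test helpers):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-434682"
	tag := "localhost/my-image:" + profile

	// Build the context on the node, mirroring functional_test.go:330 above.
	build := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "build", "-t", tag, "testdata/build", "--alsologtostderr")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}

	// Confirm the tag is now listed by the runtime, mirroring functional_test.go:466.
	list := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls")
	out, err := list.Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	if !strings.Contains(string(out), "my-image") {
		log.Fatalf("built image %s not found in:\n%s", tag, out)
	}
	fmt.Println("found", tag)
}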
TestFunctional/parallel/ImageCommands/Setup (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.359446902s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-434682
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.38s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image load --daemon kicbase/echo-server:functional-434682 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-434682 image load --daemon kicbase/echo-server:functional-434682 --alsologtostderr: (1.170351274s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-434682 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-434682 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-434682 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-434682 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 432251: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-434682 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image load --daemon kicbase/echo-server:functional-434682 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-434682
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image load --daemon kicbase/echo-server:functional-434682 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image save kicbase/echo-server:functional-434682 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image rm kicbase/echo-server:functional-434682 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-434682
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 image save --daemon kicbase/echo-server:functional-434682 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-434682
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-434682 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "295.971153ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "51.751951ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "297.955869ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "51.68701ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/MountCmd/any-port (83.49s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-434682 /tmp/TestFunctionalparallelMountCmdany-port1577728095/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757020572394914230" to /tmp/TestFunctionalparallelMountCmdany-port1577728095/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757020572394914230" to /tmp/TestFunctionalparallelMountCmdany-port1577728095/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757020572394914230" to /tmp/TestFunctionalparallelMountCmdany-port1577728095/001/test-1757020572394914230
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (252.461698ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0904 21:16:12.647668  388360 retry.go:31] will retry after 539.535332ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  4 21:16 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  4 21:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  4 21:16 test-1757020572394914230
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh cat /mount-9p/test-1757020572394914230
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-434682 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [13654533-dbfc-4c1a-9418-3706df7e89a8] Pending
helpers_test.go:352: "busybox-mount" [13654533-dbfc-4c1a-9418-3706df7e89a8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [13654533-dbfc-4c1a-9418-3706df7e89a8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [13654533-dbfc-4c1a-9418-3706df7e89a8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m21.003703493s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-434682 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434682 /tmp/TestFunctionalparallelMountCmdany-port1577728095/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (83.49s)
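The mount check above works by probing "findmnt -T /mount-9p | grep 9p" over minikube ssh and retrying on a sub-second delay (the retry.go line) until the 9p mount becomes visible. A minimal Go sketch of that poll, assuming the binary path, profile, and guest path from this run (not the real functional_test_mount_test.go helpers):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls the same findmnt probe seen in the log until the 9p mount
// is visible inside the node or the timeout expires.
func waitForMount(profile, guestPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", guestPath))
		if err := cmd.Run(); err == nil {
			return nil // mount is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mount %s not visible within %s", guestPath, timeout)
		}
		time.Sleep(500 * time.Millisecond) // comparable to the retry interval in the log
	}
}

func main() {
	if err := waitForMount("functional-434682", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("/mount-9p is mounted over 9p")
}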
TestFunctional/parallel/MountCmd/specific-port (1.65s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-434682 /tmp/TestFunctionalparallelMountCmdspecific-port2055463540/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (246.68226ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0904 21:17:36.127732  388360 retry.go:31] will retry after 494.714248ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434682 /tmp/TestFunctionalparallelMountCmdspecific-port2055463540/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 ssh "sudo umount -f /mount-9p": exit status 1 (232.46938ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-434682 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434682 /tmp/TestFunctionalparallelMountCmdspecific-port2055463540/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.65s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T" /mount1: exit status 1 (291.587285ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0904 21:17:37.826893  388360 retry.go:31] will retry after 739.174278ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-434682 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup251258155/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)
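VerifyCleanup starts three mount daemons against the same host directory and tears them all down with a single `mount --kill=true`; the "unable to find parent, assuming dead" lines are the stop helper confirming the daemons are already gone. A minimal sketch of that cleanup path, with a placeholder host directory:

    # Placeholder directory; profile name and flags follow the log above.
    out/minikube-linux-amd64 mount -p functional-434682 /tmp/shared:/mount1 &
    out/minikube-linux-amd64 mount -p functional-434682 /tmp/shared:/mount2 &
    out/minikube-linux-amd64 mount -p functional-434682 /tmp/shared:/mount3 &
    out/minikube-linux-amd64 -p functional-434682 ssh "findmnt -T" /mount1     # repeat for /mount2 and /mount3
    out/minikube-linux-amd64 mount -p functional-434682 --kill=true            # kill every mount process for this profile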

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-434682 service list: (1.66872656s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-434682 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-434682 service list -o json: (1.667087133s)
functional_test.go:1504: Took "1.667207147s" to run "out/minikube-linux-amd64 -p functional-434682 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-434682
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-434682
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-434682
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (179.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0904 21:28:12.066256  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-222298 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m59.147810366s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (179.80s)
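StartCluster provisions the whole HA cluster in one invocation: --ha gives three control-plane nodes (visible in the later status output) and --wait true blocks until the core components are healthy, which is why the start alone accounts for almost the full 179.8s. Reproduced as a standalone sketch:

    # Same flags as the run above, minus the extra logging; the profile name is specific to this job.
    out/minikube-linux-amd64 -p ha-222298 start --ha --memory 3072 --wait true \
      --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p ha-222298 status    # should list every node as Running/Configured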

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-222298 kubectl -- rollout status deployment/busybox: (2.035758776s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-6hf9s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-lmgk8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-sgmcp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-6hf9s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-lmgk8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-sgmcp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-6hf9s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-lmgk8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-sgmcp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.03s)
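DeployApp applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox rollout, and then checks DNS from each replica at three levels (kubernetes.io, kubernetes.default, and the fully qualified service name). Assuming the busybox replicas are the only pods in the default namespace, as in this run, the per-pod check can be looped like this:

    # Hedged sketch; context and deployment names are taken from the log above.
    kubectl --context ha-222298 rollout status deployment/busybox
    for pod in $(kubectl --context ha-222298 get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context ha-222298 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done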

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-6hf9s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-6hf9s -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-lmgk8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-lmgk8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-sgmcp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-sgmcp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)
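PingHostFromPods checks that pods can reach the host network: inside each busybox pod it resolves host.minikube.internal, extracts the address from the nslookup output (the awk/cut pipeline relies on busybox's output layout), and pings it once. For a single pod the same check can be scripted as:

    # Pod and profile names are from this run.
    HOST_IP=$(out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-6hf9s -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 -p ha-222298 kubectl -- exec busybox-7b57f96db7-6hf9s -- sh -c "ping -c 1 ${HOST_IP}"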

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-222298 node add --alsologtostderr -v 5: (53.898458231s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.68s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-222298 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp testdata/cp-test.txt ha-222298:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2628866297/001/cp-test_ha-222298.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298:/home/docker/cp-test.txt ha-222298-m02:/home/docker/cp-test_ha-222298_ha-222298-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m02 "sudo cat /home/docker/cp-test_ha-222298_ha-222298-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298:/home/docker/cp-test.txt ha-222298-m03:/home/docker/cp-test_ha-222298_ha-222298-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m03 "sudo cat /home/docker/cp-test_ha-222298_ha-222298-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298:/home/docker/cp-test.txt ha-222298-m04:/home/docker/cp-test_ha-222298_ha-222298-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m04 "sudo cat /home/docker/cp-test_ha-222298_ha-222298-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp testdata/cp-test.txt ha-222298-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2628866297/001/cp-test_ha-222298-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m02:/home/docker/cp-test.txt ha-222298:/home/docker/cp-test_ha-222298-m02_ha-222298.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298 "sudo cat /home/docker/cp-test_ha-222298-m02_ha-222298.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m02:/home/docker/cp-test.txt ha-222298-m03:/home/docker/cp-test_ha-222298-m02_ha-222298-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m03 "sudo cat /home/docker/cp-test_ha-222298-m02_ha-222298-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m02:/home/docker/cp-test.txt ha-222298-m04:/home/docker/cp-test_ha-222298-m02_ha-222298-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m04 "sudo cat /home/docker/cp-test_ha-222298-m02_ha-222298-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp testdata/cp-test.txt ha-222298-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2628866297/001/cp-test_ha-222298-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m03:/home/docker/cp-test.txt ha-222298:/home/docker/cp-test_ha-222298-m03_ha-222298.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298 "sudo cat /home/docker/cp-test_ha-222298-m03_ha-222298.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m03:/home/docker/cp-test.txt ha-222298-m02:/home/docker/cp-test_ha-222298-m03_ha-222298-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m02 "sudo cat /home/docker/cp-test_ha-222298-m03_ha-222298-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m03:/home/docker/cp-test.txt ha-222298-m04:/home/docker/cp-test_ha-222298-m03_ha-222298-m04.txt
E0904 21:30:02.006001  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:30:02.012378  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:30:02.023704  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:30:02.045029  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m03 "sudo cat /home/docker/cp-test.txt"
E0904 21:30:02.086512  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:30:02.167971  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:30:02.329553  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m04 "sudo cat /home/docker/cp-test_ha-222298-m03_ha-222298-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp testdata/cp-test.txt ha-222298-m04:/home/docker/cp-test.txt
E0904 21:30:02.651505  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2628866297/001/cp-test_ha-222298-m04.txt
E0904 21:30:03.293158  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m04:/home/docker/cp-test.txt ha-222298:/home/docker/cp-test_ha-222298-m04_ha-222298.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298 "sudo cat /home/docker/cp-test_ha-222298-m04_ha-222298.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m04:/home/docker/cp-test.txt ha-222298-m02:/home/docker/cp-test_ha-222298-m04_ha-222298-m02.txt
E0904 21:30:04.574810  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m02 "sudo cat /home/docker/cp-test_ha-222298-m04_ha-222298-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 cp ha-222298-m04:/home/docker/cp-test.txt ha-222298-m03:/home/docker/cp-test_ha-222298-m04_ha-222298-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m03 "sudo cat /home/docker/cp-test_ha-222298-m04_ha-222298-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.14s)
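CopyFile pushes testdata/cp-test.txt to every node, copies it between each pair of nodes, and reads every copy back over ssh; the interleaved cert_rotation errors appear to come from client-go still watching certificates of the earlier functional-434682 profile, whose files are gone by this point, and they do not affect the result. One hop of the pattern, with node names from this run:

    out/minikube-linux-amd64 -p ha-222298 cp testdata/cp-test.txt ha-222298:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298 "sudo cat /home/docker/cp-test.txt"    # verify the upload
    out/minikube-linux-amd64 -p ha-222298 cp ha-222298:/home/docker/cp-test.txt \
      ha-222298-m02:/home/docker/cp-test_ha-222298_ha-222298-m02.txt                              # node-to-node copy
    out/minikube-linux-amd64 -p ha-222298 ssh -n ha-222298-m02 \
      "sudo cat /home/docker/cp-test_ha-222298_ha-222298-m02.txt"                                 # verify on the target node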

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 node stop m02 --alsologtostderr -v 5
E0904 21:30:07.137125  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:30:12.258399  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-222298 node stop m02 --alsologtostderr -v 5: (11.83169088s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-222298 status --alsologtostderr -v 5: exit status 7 (647.092056ms)

                                                
                                                
-- stdout --
	ha-222298
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-222298-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-222298-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-222298-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 21:30:18.058248  467639 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:30:18.058386  467639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:30:18.058399  467639 out.go:374] Setting ErrFile to fd 2...
	I0904 21:30:18.058405  467639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:30:18.058607  467639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:30:18.058776  467639 out.go:368] Setting JSON to false
	I0904 21:30:18.058804  467639 mustload.go:65] Loading cluster: ha-222298
	I0904 21:30:18.058868  467639 notify.go:220] Checking for updates...
	I0904 21:30:18.059183  467639 config.go:182] Loaded profile config "ha-222298": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:30:18.059203  467639 status.go:174] checking status of ha-222298 ...
	I0904 21:30:18.059670  467639 cli_runner.go:164] Run: docker container inspect ha-222298 --format={{.State.Status}}
	I0904 21:30:18.078008  467639 status.go:371] ha-222298 host status = "Running" (err=<nil>)
	I0904 21:30:18.078041  467639 host.go:66] Checking if "ha-222298" exists ...
	I0904 21:30:18.078344  467639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-222298
	I0904 21:30:18.096823  467639 host.go:66] Checking if "ha-222298" exists ...
	I0904 21:30:18.097146  467639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:30:18.097214  467639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-222298
	I0904 21:30:18.114215  467639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/ha-222298/id_rsa Username:docker}
	I0904 21:30:18.221850  467639 ssh_runner.go:195] Run: systemctl --version
	I0904 21:30:18.225841  467639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:30:18.235884  467639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:30:18.284869  467639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 21:30:18.275074917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:30:18.285492  467639 kubeconfig.go:125] found "ha-222298" server: "https://192.168.49.254:8443"
	I0904 21:30:18.285521  467639 api_server.go:166] Checking apiserver status ...
	I0904 21:30:18.285558  467639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:30:18.296024  467639 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1513/cgroup
	I0904 21:30:18.304409  467639 api_server.go:182] apiserver freezer: "9:freezer:/docker/62ff8ff6b56db33194264d03ce3b12944a8fcbeb20424c99f7530278c5473405/crio/crio-7b9c4b091c4c4b11abb1463c7cdf6e8fbd12c6cc75006d2c177851198d2dc8f7"
	I0904 21:30:18.304464  467639 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/62ff8ff6b56db33194264d03ce3b12944a8fcbeb20424c99f7530278c5473405/crio/crio-7b9c4b091c4c4b11abb1463c7cdf6e8fbd12c6cc75006d2c177851198d2dc8f7/freezer.state
	I0904 21:30:18.312233  467639 api_server.go:204] freezer state: "THAWED"
	I0904 21:30:18.312256  467639 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0904 21:30:18.316590  467639 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0904 21:30:18.316616  467639 status.go:463] ha-222298 apiserver status = Running (err=<nil>)
	I0904 21:30:18.316629  467639 status.go:176] ha-222298 status: &{Name:ha-222298 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:30:18.316644  467639 status.go:174] checking status of ha-222298-m02 ...
	I0904 21:30:18.316999  467639 cli_runner.go:164] Run: docker container inspect ha-222298-m02 --format={{.State.Status}}
	I0904 21:30:18.335248  467639 status.go:371] ha-222298-m02 host status = "Stopped" (err=<nil>)
	I0904 21:30:18.335275  467639 status.go:384] host is not running, skipping remaining checks
	I0904 21:30:18.335291  467639 status.go:176] ha-222298-m02 status: &{Name:ha-222298-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:30:18.335330  467639 status.go:174] checking status of ha-222298-m03 ...
	I0904 21:30:18.335628  467639 cli_runner.go:164] Run: docker container inspect ha-222298-m03 --format={{.State.Status}}
	I0904 21:30:18.353672  467639 status.go:371] ha-222298-m03 host status = "Running" (err=<nil>)
	I0904 21:30:18.353696  467639 host.go:66] Checking if "ha-222298-m03" exists ...
	I0904 21:30:18.353960  467639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-222298-m03
	I0904 21:30:18.373075  467639 host.go:66] Checking if "ha-222298-m03" exists ...
	I0904 21:30:18.373334  467639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:30:18.373381  467639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-222298-m03
	I0904 21:30:18.389984  467639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/ha-222298-m03/id_rsa Username:docker}
	I0904 21:30:18.469800  467639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:30:18.480840  467639 kubeconfig.go:125] found "ha-222298" server: "https://192.168.49.254:8443"
	I0904 21:30:18.480867  467639 api_server.go:166] Checking apiserver status ...
	I0904 21:30:18.480897  467639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:30:18.490366  467639 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup
	I0904 21:30:18.498377  467639 api_server.go:182] apiserver freezer: "9:freezer:/docker/19e9cd16ed27223769d59b27ec27a9cb2a7808544d2043cb698820ea823e56dc/crio/crio-26cf3767975442d107a672057b620f636745db103e135450a26b2b2223aa31ac"
	I0904 21:30:18.498429  467639 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/19e9cd16ed27223769d59b27ec27a9cb2a7808544d2043cb698820ea823e56dc/crio/crio-26cf3767975442d107a672057b620f636745db103e135450a26b2b2223aa31ac/freezer.state
	I0904 21:30:18.505754  467639 api_server.go:204] freezer state: "THAWED"
	I0904 21:30:18.505775  467639 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0904 21:30:18.509905  467639 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0904 21:30:18.509929  467639 status.go:463] ha-222298-m03 apiserver status = Running (err=<nil>)
	I0904 21:30:18.509939  467639 status.go:176] ha-222298-m03 status: &{Name:ha-222298-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:30:18.509954  467639 status.go:174] checking status of ha-222298-m04 ...
	I0904 21:30:18.510176  467639 cli_runner.go:164] Run: docker container inspect ha-222298-m04 --format={{.State.Status}}
	I0904 21:30:18.527779  467639 status.go:371] ha-222298-m04 host status = "Running" (err=<nil>)
	I0904 21:30:18.527832  467639 host.go:66] Checking if "ha-222298-m04" exists ...
	I0904 21:30:18.528120  467639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-222298-m04
	I0904 21:30:18.544830  467639 host.go:66] Checking if "ha-222298-m04" exists ...
	I0904 21:30:18.545072  467639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:30:18.545109  467639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-222298-m04
	I0904 21:30:18.562408  467639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/ha-222298-m04/id_rsa Username:docker}
	I0904 21:30:18.645343  467639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:30:18.655954  467639 status.go:176] ha-222298-m04 status: &{Name:ha-222298-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.48s)
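StopSecondaryNode stops m02 and then expects `status` to exit non-zero (exit status 7 in this run) while still printing per-node state, so a script around it needs to capture that exit code rather than treat it as a hard failure:

    out/minikube-linux-amd64 -p ha-222298 node stop m02
    out/minikube-linux-amd64 -p ha-222298 status    # prints every node; exits 7 here because m02's host is stopped
    rc=$?
    [ "$rc" -eq 0 ] || echo "status exited ${rc}: at least one node is not running"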

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (20.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 node start m02 --alsologtostderr -v 5
E0904 21:30:22.500817  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-222298 node start m02 --alsologtostderr -v 5: (20.063602979s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.91s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 stop --alsologtostderr -v 5
E0904 21:30:42.982953  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-222298 stop --alsologtostderr -v 5: (36.543372338s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 start --wait true --alsologtostderr -v 5
E0904 21:31:23.944926  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-222298 start --wait true --alsologtostderr -v 5: (1m24.237544218s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.88s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 node delete m03 --alsologtostderr -v 5
E0904 21:32:45.866659  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-222298 node delete m03 --alsologtostderr -v 5: (10.48489178s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.24s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 stop --alsologtostderr -v 5
E0904 21:33:12.066221  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-222298 stop --alsologtostderr -v 5: (35.416062104s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-222298 status --alsologtostderr -v 5: exit status 7 (100.802954ms)

                                                
                                                
-- stdout --
	ha-222298
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-222298-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-222298-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 21:33:29.226657  484703 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:33:29.226797  484703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:33:29.226809  484703 out.go:374] Setting ErrFile to fd 2...
	I0904 21:33:29.226814  484703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:33:29.227018  484703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:33:29.227210  484703 out.go:368] Setting JSON to false
	I0904 21:33:29.227252  484703 mustload.go:65] Loading cluster: ha-222298
	I0904 21:33:29.227283  484703 notify.go:220] Checking for updates...
	I0904 21:33:29.227610  484703 config.go:182] Loaded profile config "ha-222298": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:33:29.227636  484703 status.go:174] checking status of ha-222298 ...
	I0904 21:33:29.228071  484703 cli_runner.go:164] Run: docker container inspect ha-222298 --format={{.State.Status}}
	I0904 21:33:29.247717  484703 status.go:371] ha-222298 host status = "Stopped" (err=<nil>)
	I0904 21:33:29.247754  484703 status.go:384] host is not running, skipping remaining checks
	I0904 21:33:29.247763  484703 status.go:176] ha-222298 status: &{Name:ha-222298 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:33:29.247804  484703 status.go:174] checking status of ha-222298-m02 ...
	I0904 21:33:29.248178  484703 cli_runner.go:164] Run: docker container inspect ha-222298-m02 --format={{.State.Status}}
	I0904 21:33:29.264775  484703 status.go:371] ha-222298-m02 host status = "Stopped" (err=<nil>)
	I0904 21:33:29.264813  484703 status.go:384] host is not running, skipping remaining checks
	I0904 21:33:29.264822  484703 status.go:176] ha-222298-m02 status: &{Name:ha-222298-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:33:29.264843  484703 status.go:174] checking status of ha-222298-m04 ...
	I0904 21:33:29.265082  484703 cli_runner.go:164] Run: docker container inspect ha-222298-m04 --format={{.State.Status}}
	I0904 21:33:29.281632  484703 status.go:371] ha-222298-m04 host status = "Stopped" (err=<nil>)
	I0904 21:33:29.281652  484703 status.go:384] host is not running, skipping remaining checks
	I0904 21:33:29.281660  484703 status.go:176] ha-222298-m04 status: &{Name:ha-222298-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (59.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-222298 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (58.931374551s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (59.65s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (67.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 node add --control-plane --alsologtostderr -v 5
E0904 21:35:02.006406  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:35:29.708166  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-222298 node add --control-plane --alsologtostderr -v 5: (1m6.305594464s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-222298 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (67.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.80s)

                                                
                                    
TestJSONOutput/start/Command (71.43s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-709067 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-709067 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m11.431553805s)
--- PASS: TestJSONOutput/start/Command (71.43s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-709067 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.56s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-709067 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.72s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-709067 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-709067 --output=json --user=testUser: (5.722579157s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-736738 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-736738 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (64.743872ms)

-- stdout --
	{"specversion":"1.0","id":"4f4e16af-1e55-46c3-9b9f-de1fb05f761f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-736738] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8589994-3ff5-411d-800d-f6f99320fd36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21490"}}
	{"specversion":"1.0","id":"b0c53624-ef02-4b81-9f36-5d5b29b21a70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"412a707e-0e9a-4926-bcbe-36338ba811b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig"}}
	{"specversion":"1.0","id":"f2ce340a-e0d9-4d2a-836a-bf3b45de7ec3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube"}}
	{"specversion":"1.0","id":"539a2096-c47f-4499-8486-ba70163d322a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6bb9f4b6-4cef-4053-8971-aa5bf9e0bcfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2b4540e6-0ce4-4d52-b2a9-d7c3525e49ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-736738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-736738
--- PASS: TestErrorJSONOutput (0.20s)
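
For context on the block above: with --output=json, minikube prints one CloudEvents-style JSON object per line, and the unsupported driver produces the io.k8s.sigs.minikube.error event captured in the -- stdout -- section. A minimal Go sketch for decoding that error event (illustrative only; the struct and field handling below are assumptions for this example, not minikube's own types):

// Illustrative only: decode the io.k8s.sigs.minikube.error event printed by
// `minikube start --output=json` in the log above.
package main

import (
	"encoding/json"
	"fmt"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// One line copied from the -- stdout -- block above (fields trimmed for brevity).
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("%s: %s (exit code %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
	}
}

Run against the error line above, this prints the error name, message, and exit code 56, matching the non-zero exit reported by the test.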

TestKicCustomNetwork/create_custom_network (29.95s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-884414 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-884414 --network=: (27.844504732s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-884414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-884414
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-884414: (2.083861314s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.95s)

TestKicCustomNetwork/use_default_bridge_network (26.71s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-250156 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-250156 --network=bridge: (24.803699005s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-250156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-250156
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-250156: (1.889848066s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.71s)

TestKicExistingNetwork (23.24s)

=== RUN   TestKicExistingNetwork
I0904 21:38:04.745787  388360 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0904 21:38:04.761510  388360 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0904 21:38:04.761584  388360 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0904 21:38:04.761602  388360 cli_runner.go:164] Run: docker network inspect existing-network
W0904 21:38:04.778330  388360 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0904 21:38:04.778361  388360 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0904 21:38:04.778378  388360 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0904 21:38:04.778517  388360 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0904 21:38:04.793949  388360 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5502e71d097a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:ef:c1:96:ed:36} reservation:<nil>}
I0904 21:38:04.794405  388360 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dab490}
I0904 21:38:04.794442  388360 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0904 21:38:04.794481  388360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0904 21:38:04.842169  388360 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-197139 --network=existing-network
E0904 21:38:12.066374  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-197139 --network=existing-network: (21.215956964s)
helpers_test.go:175: Cleaning up "existing-network-197139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-197139
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-197139: (1.895108262s)
I0904 21:38:27.969531  388360 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.24s)

TestKicCustomSubnet (27.12s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-916693 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-916693 --subnet=192.168.60.0/24: (25.060641411s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-916693 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-916693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-916693
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-916693: (2.037061462s)
--- PASS: TestKicCustomSubnet (27.12s)

TestKicStaticIP (24.12s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-726258 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-726258 --static-ip=192.168.200.200: (21.938264491s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-726258 ip
helpers_test.go:175: Cleaning up "static-ip-726258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-726258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-726258: (2.054961018s)
--- PASS: TestKicStaticIP (24.12s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (51.29s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-269733 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-269733 --driver=docker  --container-runtime=crio: (22.365463375s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-290029 --driver=docker  --container-runtime=crio
E0904 21:40:02.012967  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-290029 --driver=docker  --container-runtime=crio: (24.199342919s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-269733
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-290029
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-290029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-290029
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-290029: (1.797385055s)
helpers_test.go:175: Cleaning up "first-269733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-269733
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-269733: (1.808377033s)
--- PASS: TestMinikubeProfile (51.29s)

TestMountStart/serial/StartWithMountFirst (5.46s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-525250 --memory=3072 --mount-string /tmp/TestMountStartserial525955977/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-525250 --memory=3072 --mount-string /tmp/TestMountStartserial525955977/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.454745851s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.46s)

TestMountStart/serial/VerifyMountFirst (0.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-525250 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

TestMountStart/serial/StartWithMountSecond (5.22s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-538641 --memory=3072 --mount-string /tmp/TestMountStartserial525955977/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-538641 --memory=3072 --mount-string /tmp/TestMountStartserial525955977/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.224430305s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.22s)

TestMountStart/serial/VerifyMountSecond (0.23s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-538641 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

TestMountStart/serial/DeleteFirst (1.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-525250 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-525250 --alsologtostderr -v=5: (1.585832159s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-538641 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-538641
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-538641: (1.171933612s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.33s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-538641
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-538641: (6.325023537s)
--- PASS: TestMountStart/serial/RestartStopped (7.33s)

TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-538641 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (125.54s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-049439 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0904 21:41:15.140498  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-049439 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m5.122710929s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.54s)

TestMultiNode/serial/DeployApp2Nodes (3.9s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-049439 -- rollout status deployment/busybox: (2.479504045s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- exec busybox-7b57f96db7-b8m8j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- exec busybox-7b57f96db7-hr8g4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- exec busybox-7b57f96db7-b8m8j -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- exec busybox-7b57f96db7-hr8g4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- exec busybox-7b57f96db7-b8m8j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- exec busybox-7b57f96db7-hr8g4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.90s)

TestMultiNode/serial/PingHostFrom2Pods (0.71s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- exec busybox-7b57f96db7-b8m8j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- exec busybox-7b57f96db7-b8m8j -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- exec busybox-7b57f96db7-hr8g4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-049439 -- exec busybox-7b57f96db7-hr8g4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)

TestMultiNode/serial/AddNode (56.9s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-049439 -v=5 --alsologtostderr
E0904 21:43:12.065685  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-049439 -v=5 --alsologtostderr: (56.340247759s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.90s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-049439 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.58s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

TestMultiNode/serial/CopyFile (8.51s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp testdata/cp-test.txt multinode-049439:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp multinode-049439:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2420995530/001/cp-test_multinode-049439.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp multinode-049439:/home/docker/cp-test.txt multinode-049439-m02:/home/docker/cp-test_multinode-049439_multinode-049439-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m02 "sudo cat /home/docker/cp-test_multinode-049439_multinode-049439-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp multinode-049439:/home/docker/cp-test.txt multinode-049439-m03:/home/docker/cp-test_multinode-049439_multinode-049439-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m03 "sudo cat /home/docker/cp-test_multinode-049439_multinode-049439-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp testdata/cp-test.txt multinode-049439-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp multinode-049439-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2420995530/001/cp-test_multinode-049439-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp multinode-049439-m02:/home/docker/cp-test.txt multinode-049439:/home/docker/cp-test_multinode-049439-m02_multinode-049439.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439 "sudo cat /home/docker/cp-test_multinode-049439-m02_multinode-049439.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp multinode-049439-m02:/home/docker/cp-test.txt multinode-049439-m03:/home/docker/cp-test_multinode-049439-m02_multinode-049439-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m03 "sudo cat /home/docker/cp-test_multinode-049439-m02_multinode-049439-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp testdata/cp-test.txt multinode-049439-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp multinode-049439-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2420995530/001/cp-test_multinode-049439-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp multinode-049439-m03:/home/docker/cp-test.txt multinode-049439:/home/docker/cp-test_multinode-049439-m03_multinode-049439.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439 "sudo cat /home/docker/cp-test_multinode-049439-m03_multinode-049439.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 cp multinode-049439-m03:/home/docker/cp-test.txt multinode-049439-m02:/home/docker/cp-test_multinode-049439-m03_multinode-049439-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 ssh -n multinode-049439-m02 "sudo cat /home/docker/cp-test_multinode-049439-m03_multinode-049439-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.51s)

TestMultiNode/serial/StopNode (2.04s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-049439 node stop m03: (1.166576115s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-049439 status: exit status 7 (428.854953ms)

-- stdout --
	multinode-049439
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-049439-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-049439-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-049439 status --alsologtostderr: exit status 7 (445.461012ms)

-- stdout --
	multinode-049439
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-049439-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-049439-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0904 21:43:51.828526  549536 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:43:51.828632  549536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:43:51.828640  549536 out.go:374] Setting ErrFile to fd 2...
	I0904 21:43:51.828645  549536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:43:51.828846  549536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:43:51.829027  549536 out.go:368] Setting JSON to false
	I0904 21:43:51.829057  549536 mustload.go:65] Loading cluster: multinode-049439
	I0904 21:43:51.829112  549536 notify.go:220] Checking for updates...
	I0904 21:43:51.829496  549536 config.go:182] Loaded profile config "multinode-049439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:43:51.829522  549536 status.go:174] checking status of multinode-049439 ...
	I0904 21:43:51.830110  549536 cli_runner.go:164] Run: docker container inspect multinode-049439 --format={{.State.Status}}
	I0904 21:43:51.848843  549536 status.go:371] multinode-049439 host status = "Running" (err=<nil>)
	I0904 21:43:51.848890  549536 host.go:66] Checking if "multinode-049439" exists ...
	I0904 21:43:51.849269  549536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-049439
	I0904 21:43:51.866129  549536 host.go:66] Checking if "multinode-049439" exists ...
	I0904 21:43:51.866361  549536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:43:51.866413  549536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-049439
	I0904 21:43:51.882739  549536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33280 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/multinode-049439/id_rsa Username:docker}
	I0904 21:43:51.969893  549536 ssh_runner.go:195] Run: systemctl --version
	I0904 21:43:51.973923  549536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:43:51.984111  549536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:43:52.032392  549536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-09-04 21:43:52.023458876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:43:52.033159  549536 kubeconfig.go:125] found "multinode-049439" server: "https://192.168.67.2:8443"
	I0904 21:43:52.033198  549536 api_server.go:166] Checking apiserver status ...
	I0904 21:43:52.033245  549536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:43:52.043389  549536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1563/cgroup
	I0904 21:43:52.051589  549536 api_server.go:182] apiserver freezer: "9:freezer:/docker/8e9dca85b6e61f106cdec218110775de51311fa978e0a51b923f4bf3725066e7/crio/crio-a8e969b838623ee274e36edf5b23099e83ec16ef838ce25ebc1815ec45c02010"
	I0904 21:43:52.051651  549536 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8e9dca85b6e61f106cdec218110775de51311fa978e0a51b923f4bf3725066e7/crio/crio-a8e969b838623ee274e36edf5b23099e83ec16ef838ce25ebc1815ec45c02010/freezer.state
	I0904 21:43:52.059232  549536 api_server.go:204] freezer state: "THAWED"
	I0904 21:43:52.059255  549536 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0904 21:43:52.063203  549536 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0904 21:43:52.063228  549536 status.go:463] multinode-049439 apiserver status = Running (err=<nil>)
	I0904 21:43:52.063241  549536 status.go:176] multinode-049439 status: &{Name:multinode-049439 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:43:52.063261  549536 status.go:174] checking status of multinode-049439-m02 ...
	I0904 21:43:52.063507  549536 cli_runner.go:164] Run: docker container inspect multinode-049439-m02 --format={{.State.Status}}
	I0904 21:43:52.081997  549536 status.go:371] multinode-049439-m02 host status = "Running" (err=<nil>)
	I0904 21:43:52.082022  549536 host.go:66] Checking if "multinode-049439-m02" exists ...
	I0904 21:43:52.082273  549536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-049439-m02
	I0904 21:43:52.098580  549536 host.go:66] Checking if "multinode-049439-m02" exists ...
	I0904 21:43:52.098859  549536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:43:52.098903  549536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-049439-m02
	I0904 21:43:52.115904  549536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/21490-384635/.minikube/machines/multinode-049439-m02/id_rsa Username:docker}
	I0904 21:43:52.197289  549536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:43:52.206987  549536 status.go:176] multinode-049439-m02 status: &{Name:multinode-049439-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:43:52.207029  549536 status.go:174] checking status of multinode-049439-m03 ...
	I0904 21:43:52.207293  549536 cli_runner.go:164] Run: docker container inspect multinode-049439-m03 --format={{.State.Status}}
	I0904 21:43:52.224001  549536 status.go:371] multinode-049439-m03 host status = "Stopped" (err=<nil>)
	I0904 21:43:52.224021  549536 status.go:384] host is not running, skipping remaining checks
	I0904 21:43:52.224029  549536 status.go:176] multinode-049439-m03 status: &{Name:multinode-049439-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.04s)

TestMultiNode/serial/StartAfterStop (7.21s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-049439 node start m03 -v=5 --alsologtostderr: (6.589982322s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.21s)

TestMultiNode/serial/RestartKeepsNodes (69.27s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-049439
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-049439
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-049439: (24.640587738s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-049439 --wait=true -v=5 --alsologtostderr
E0904 21:45:02.005747  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-049439 --wait=true -v=5 --alsologtostderr: (44.527658983s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-049439
--- PASS: TestMultiNode/serial/RestartKeepsNodes (69.27s)

TestMultiNode/serial/DeleteNode (5.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-049439 node delete m03: (4.566833492s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.10s)

TestMultiNode/serial/StopMultiNode (23.7s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-049439 stop: (23.523836776s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-049439 status: exit status 7 (85.430503ms)

-- stdout --
	multinode-049439
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-049439-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-049439 status --alsologtostderr: exit status 7 (85.901408ms)

-- stdout --
	multinode-049439
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-049439-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0904 21:45:37.455766  559147 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:45:37.456128  559147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:45:37.456142  559147 out.go:374] Setting ErrFile to fd 2...
	I0904 21:45:37.456147  559147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:45:37.456375  559147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:45:37.456588  559147 out.go:368] Setting JSON to false
	I0904 21:45:37.456619  559147 mustload.go:65] Loading cluster: multinode-049439
	I0904 21:45:37.456679  559147 notify.go:220] Checking for updates...
	I0904 21:45:37.457100  559147 config.go:182] Loaded profile config "multinode-049439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:45:37.457126  559147 status.go:174] checking status of multinode-049439 ...
	I0904 21:45:37.457561  559147 cli_runner.go:164] Run: docker container inspect multinode-049439 --format={{.State.Status}}
	I0904 21:45:37.476014  559147 status.go:371] multinode-049439 host status = "Stopped" (err=<nil>)
	I0904 21:45:37.476038  559147 status.go:384] host is not running, skipping remaining checks
	I0904 21:45:37.476048  559147 status.go:176] multinode-049439 status: &{Name:multinode-049439 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:45:37.476091  559147 status.go:174] checking status of multinode-049439-m02 ...
	I0904 21:45:37.476349  559147 cli_runner.go:164] Run: docker container inspect multinode-049439-m02 --format={{.State.Status}}
	I0904 21:45:37.492716  559147 status.go:371] multinode-049439-m02 host status = "Stopped" (err=<nil>)
	I0904 21:45:37.492735  559147 status.go:384] host is not running, skipping remaining checks
	I0904 21:45:37.492741  559147 status.go:176] multinode-049439-m02 status: &{Name:multinode-049439-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.70s)

TestMultiNode/serial/RestartMultiNode (44.38s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-049439 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-049439 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (43.843241777s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-049439 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.38s)

TestMultiNode/serial/ValidateNameConflict (26.62s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-049439
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-049439-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-049439-m02 --driver=docker  --container-runtime=crio: exit status 14 (66.015747ms)

-- stdout --
	* [multinode-049439-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-049439-m02' is duplicated with machine name 'multinode-049439-m02' in profile 'multinode-049439'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-049439-m03 --driver=docker  --container-runtime=crio
E0904 21:46:25.071994  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-049439-m03 --driver=docker  --container-runtime=crio: (24.451152137s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-049439
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-049439: exit status 80 (259.645485ms)

-- stdout --
	* Adding node m03 to cluster multinode-049439 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-049439-m03 already exists in multinode-049439-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-049439-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-049439-m03: (1.793328795s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.62s)

TestPreload (109.99s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-548787 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-548787 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (50.302945293s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-548787 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-548787 image pull gcr.io/k8s-minikube/busybox: (1.182921276s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-548787
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-548787: (5.745970663s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-548787 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0904 21:48:12.065754  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-548787 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (50.324005501s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-548787 image list
helpers_test.go:175: Cleaning up "test-preload-548787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-548787
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-548787: (2.227638849s)
--- PASS: TestPreload (109.99s)

TestScheduledStopUnix (99.4s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-942313 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-942313 --memory=3072 --driver=docker  --container-runtime=crio: (23.411478058s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-942313 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-942313 -n scheduled-stop-942313
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-942313 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0904 21:49:06.110437  388360 retry.go:31] will retry after 106.797µs: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.111610  388360 retry.go:31] will retry after 187.702µs: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.112795  388360 retry.go:31] will retry after 145.313µs: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.113933  388360 retry.go:31] will retry after 497.368µs: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.115076  388360 retry.go:31] will retry after 698.373µs: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.116217  388360 retry.go:31] will retry after 704.374µs: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.117352  388360 retry.go:31] will retry after 657.597µs: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.118514  388360 retry.go:31] will retry after 2.074743ms: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.120659  388360 retry.go:31] will retry after 3.086366ms: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.123843  388360 retry.go:31] will retry after 4.912123ms: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.129056  388360 retry.go:31] will retry after 7.155429ms: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.137323  388360 retry.go:31] will retry after 12.093283ms: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.149473  388360 retry.go:31] will retry after 7.522832ms: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.157688  388360 retry.go:31] will retry after 13.459884ms: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.171920  388360 retry.go:31] will retry after 19.897618ms: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
I0904 21:49:06.192145  388360 retry.go:31] will retry after 35.760443ms: open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/scheduled-stop-942313/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-942313 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-942313 -n scheduled-stop-942313
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-942313
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-942313 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0904 21:50:02.015508  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-942313
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-942313: exit status 7 (69.200663ms)

                                                
                                                
-- stdout --
	scheduled-stop-942313
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-942313 -n scheduled-stop-942313
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-942313 -n scheduled-stop-942313: exit status 7 (67.626906ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-942313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-942313
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-942313: (4.678986539s)
--- PASS: TestScheduledStopUnix (99.40s)
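
Note: the scheduled-stop commands driven above can be used directly; a minimal sketch with the same flags (the profile name "sched-demo" is illustrative):
  minikube start -p sched-demo --memory=3072 --driver=docker --container-runtime=crio
  minikube stop -p sched-demo --schedule 5m           # arm a stop five minutes out
  minikube status --format={{.TimeToStop}} -p sched-demo
  minikube stop -p sched-demo --cancel-scheduled      # disarm it
  minikube stop -p sched-demo --schedule 15s          # re-arm with a short delay
  minikube status --format={{.Host}} -p sched-demo    # after the delay this reports Stopped (exit status 7 is expected)
  minikube delete -p sched-demo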

                                                
                                    
TestInsufficientStorage (9.8s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-558890 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-558890 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.539770002s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6b9fe2f8-1886-4d87-9b7d-1f16c5071875","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-558890] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"489927df-5ee3-4d45-a584-4f2172ad192b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21490"}}
	{"specversion":"1.0","id":"60d84045-153e-411e-b3b9-d11113e9e2a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0c1988b1-670e-4776-94db-ad716bbee0d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig"}}
	{"specversion":"1.0","id":"6ca36e2d-4a3e-4b67-a498-03066c1d3d77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube"}}
	{"specversion":"1.0","id":"f885b63e-8759-486b-b231-447c344d9b62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3c656ef8-5ebd-4e55-b747-2900d7011e02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"05ee0707-9eaa-4a73-bfff-1d56acc749cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"787ae069-0ca9-45d8-968c-8affc845ae9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bdcc7ce6-cfc9-435e-ac7b-8de79c2759f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5bfb29b6-9308-4aa5-9161-ba0b1fbe3539","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c55a6a92-9eb8-4dab-b8b0-54d02cf6251a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-558890\" primary control-plane node in \"insufficient-storage-558890\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"508b7e86-fcb5-471a-b6a3-195669239bcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756116447-21413 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bdb220ba-c942-4ec1-93cd-b4709bdd1b99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4769c743-2f8d-4ca4-909e-99211f817307","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-558890 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-558890 --output=json --layout=cluster: exit status 7 (242.216978ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-558890","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-558890","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0904 21:50:29.485053  581025 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-558890" does not appear in /home/jenkins/minikube-integration/21490-384635/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-558890 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-558890 --output=json --layout=cluster: exit status 7 (242.348724ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-558890","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-558890","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0904 21:50:29.727980  581125 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-558890" does not appear in /home/jenkins/minikube-integration/21490-384635/kubeconfig
	E0904 21:50:29.737399  581125 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/insufficient-storage-558890/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-558890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-558890
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-558890: (1.777604106s)
--- PASS: TestInsufficientStorage (9.80s)
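
Note: exit status 26 above is minikube's RSRC_DOCKER_STORAGE preflight failure. The test provokes it through the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values visible in the JSON output, which appear to be test-only overrides; a rough sketch of the same setup (profile name is illustrative):
  export MINIKUBE_TEST_STORAGE_CAPACITY=100    # values the failing run above was started with
  export MINIKUBE_TEST_AVAILABLE_STORAGE=19
  minikube start -p storage-demo --memory=3072 --output=json --driver=docker --container-runtime=crio    # fails with exit code 26
  minikube status -p storage-demo --output=json --layout=cluster    # reports StatusCode 507 / InsufficientStorage
  # per the error message, passing '--force' to start skips the check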

                                                
                                    
TestRunningBinaryUpgrade (66.03s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2436934076 start -p running-upgrade-924131 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2436934076 start -p running-upgrade-924131 --memory=3072 --vm-driver=docker  --container-runtime=crio: (46.081170717s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-924131 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-924131 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.339382661s)
helpers_test.go:175: Cleaning up "running-upgrade-924131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-924131
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-924131: (1.958923042s)
--- PASS: TestRunningBinaryUpgrade (66.03s)
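
Note: the "running binary upgrade" path is a start with an older release followed by another start with the current binary while the cluster is still running; a condensed sketch (the /tmp path stands in for the downloaded v1.32.0 release binary used by the test, and the profile name is illustrative):
  /tmp/minikube-v1.32.0 start -p run-upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=crio
  out/minikube-linux-amd64 start -p run-upgrade-demo --memory=3072 --driver=docker --container-runtime=crio    # same profile, newer binary
  out/minikube-linux-amd64 delete -p run-upgrade-demo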

                                                
                                    
TestKubernetesUpgrade (341.17s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-670610 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-670610 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.656833399s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-670610
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-670610: (1.236715912s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-670610 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-670610 status --format={{.Host}}: exit status 7 (75.292746ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-670610 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-670610 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.112650507s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-670610 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-670610 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-670610 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (72.679752ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-670610] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-670610
	    minikube start -p kubernetes-upgrade-670610 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6706102 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-670610 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-670610 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-670610 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.530746085s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-670610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-670610
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-670610: (2.42901592s)
--- PASS: TestKubernetesUpgrade (341.17s)
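
Note: the sequence above shows that a Kubernetes version upgrade is done with stop + start at the new version, while an in-place downgrade is refused (K8S_DOWNGRADE_UNSUPPORTED, exit status 106) and has to go through delete/recreate, exactly as the suggestion text says. A condensed sketch (profile name is illustrative):
  minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
  minikube stop -p k8s-upgrade-demo
  minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=crio    # upgrade
  minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio    # refused with exit status 106
  # to actually go back to v1.28.0, recreate the cluster:
  minikube delete -p k8s-upgrade-demo
  minikube start -p k8s-upgrade-demo --kubernetes-version=v1.28.0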

                                                
                                    
TestMissingContainerUpgrade (65.29s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2525990260 start -p missing-upgrade-898318 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2525990260 start -p missing-upgrade-898318 --memory=3072 --driver=docker  --container-runtime=crio: (23.381255269s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-898318
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-898318
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-898318 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-898318 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.700353596s)
helpers_test.go:175: Cleaning up "missing-upgrade-898318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-898318
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-898318: (1.897699833s)
--- PASS: TestMissingContainerUpgrade (65.29s)
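
Note: this scenario covers a profile whose Docker container has been removed behind minikube's back; the recovery is simply another start, which recreates the node. A minimal sketch (old-binary path and profile name are illustrative):
  /tmp/minikube-v1.32.0 start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio
  docker stop missing-demo && docker rm missing-demo    # simulate the container disappearing
  out/minikube-linux-amd64 start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio    # recreates the container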

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (64.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.708916053 start -p stopped-upgrade-881527 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.708916053 start -p stopped-upgrade-881527 --memory=3072 --vm-driver=docker  --container-runtime=crio: (46.468101126s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.708916053 -p stopped-upgrade-881527 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.708916053 -p stopped-upgrade-881527 stop: (1.205238472s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-881527 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-881527 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.762077465s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (64.44s)
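
Note: same idea as the running-binary upgrade above, except the cluster is stopped with the old binary before the new binary takes over; a short sketch (old-binary path and profile name are illustrative):
  /tmp/minikube-v1.32.0 start -p stopped-demo --memory=3072 --vm-driver=docker --container-runtime=crio
  /tmp/minikube-v1.32.0 -p stopped-demo stop
  out/minikube-linux-amd64 start -p stopped-demo --memory=3072 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 logs -p stopped-demo    # the MinikubeLogs step that follows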

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-881527
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-881527: (1.037256142s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                    
TestPause/serial/Start (74.92s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-088246 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-088246 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m14.916923952s)
--- PASS: TestPause/serial/Start (74.92s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-280295 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-280295 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (80.935079ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-280295] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
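
Note: exit status 14 here is the expected usage error: --no-kubernetes and --kubernetes-version are mutually exclusive. Following minikube's own suggestion from the stderr above (profile name is illustrative):
  minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio    # fails: MK_USAGE, exit status 14
  minikube config unset kubernetes-version    # clear any globally configured version
  minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio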

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-280295 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-280295 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.129069101s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-280295 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.44s)

                                                
                                    
TestNetworkPlugins/group/false (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-364928 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-364928 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (144.274992ms)

                                                
                                                
-- stdout --
	* [false-364928] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21490
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 21:52:05.890212  605730 out.go:360] Setting OutFile to fd 1 ...
	I0904 21:52:05.890514  605730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:52:05.890526  605730 out.go:374] Setting ErrFile to fd 2...
	I0904 21:52:05.890530  605730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 21:52:05.890783  605730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21490-384635/.minikube/bin
	I0904 21:52:05.891424  605730 out.go:368] Setting JSON to false
	I0904 21:52:05.892629  605730 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12875,"bootTime":1757009851,"procs":272,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 21:52:05.892688  605730 start.go:140] virtualization: kvm guest
	I0904 21:52:05.894674  605730 out.go:179] * [false-364928] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 21:52:05.895895  605730 out.go:179]   - MINIKUBE_LOCATION=21490
	I0904 21:52:05.896010  605730 notify.go:220] Checking for updates...
	I0904 21:52:05.898486  605730 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:52:05.899733  605730 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21490-384635/kubeconfig
	I0904 21:52:05.900941  605730 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21490-384635/.minikube
	I0904 21:52:05.902380  605730 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 21:52:05.903618  605730 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:52:05.905184  605730 config.go:182] Loaded profile config "NoKubernetes-280295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:52:05.905295  605730 config.go:182] Loaded profile config "kubernetes-upgrade-670610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:52:05.905418  605730 config.go:182] Loaded profile config "pause-088246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 21:52:05.905539  605730 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 21:52:05.928944  605730 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 21:52:05.929037  605730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:52:05.977289  605730 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:75 SystemTime:2025-09-04 21:52:05.967929241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 21:52:05.977395  605730 docker.go:318] overlay module found
	I0904 21:52:05.979377  605730 out.go:179] * Using the docker driver based on user configuration
	I0904 21:52:05.980392  605730 start.go:304] selected driver: docker
	I0904 21:52:05.980407  605730 start.go:918] validating driver "docker" against <nil>
	I0904 21:52:05.980419  605730 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:52:05.982342  605730 out.go:203] 
	W0904 21:52:05.983447  605730 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0904 21:52:05.984464  605730 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-364928 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-364928" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-364928" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 21:52:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-280295
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 21:51:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-670610
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 21:52:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-088246
contexts:
- context:
    cluster: NoKubernetes-280295
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 21:52:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: NoKubernetes-280295
  name: NoKubernetes-280295
- context:
    cluster: kubernetes-upgrade-670610
    user: kubernetes-upgrade-670610
  name: kubernetes-upgrade-670610
- context:
    cluster: pause-088246
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 21:52:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-088246
  name: pause-088246
current-context: pause-088246
kind: Config
preferences: {}
users:
- name: NoKubernetes-280295
  user:
    client-certificate: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/NoKubernetes-280295/client.crt
    client-key: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/NoKubernetes-280295/client.key
- name: kubernetes-upgrade-670610
  user:
    client-certificate: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/kubernetes-upgrade-670610/client.crt
    client-key: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/kubernetes-upgrade-670610/client.key
- name: pause-088246
  user:
    client-certificate: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/pause-088246/client.crt
    client-key: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/pause-088246/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-364928

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-364928"

                                                
                                                
----------------------- debugLogs end: false-364928 [took: 3.241080385s] --------------------------------
helpers_test.go:175: Cleaning up "false-364928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-364928
--- PASS: TestNetworkPlugins/group/false (3.54s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-280295 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-280295 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (3.542759185s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-280295 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-280295 status -o json: exit status 2 (277.648369ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-280295","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-280295
W0904 21:52:12.827577  388360 install.go:62] docker-machine-driver-kvm2: exit status 1
W0904 21:52:12.827694  388360 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0904 21:52:12.827759  388360 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2098896917/001/docker-machine-driver-kvm2
I0904 21:52:13.260283  388360 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2098896917/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc0005911d0 gz:0xc0005911d8 tar:0xc000591150 tar.bz2:0xc000591180 tar.gz:0xc000591190 tar.xz:0xc0005911a0 tar.zst:0xc0005911c0 tbz2:0xc000591180 tgz:0xc000591190 txz:0xc0005911a0 tzst:0xc0005911c0 xz:0xc0005911e0 zip:0xc000591210 zst:0xc0005911e8] Getters:map[file:0xc00069e670 http:0xc000508050 https:0xc0005080a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0904 21:52:13.260346  388360 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2098896917/001/docker-machine-driver-kvm2
I0904 21:52:14.525535  388360 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0904 21:52:14.525646  388360 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0904 21:52:14.559333  388360 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0904 21:52:14.559370  388360 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0904 21:52:14.559441  388360 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0904 21:52:14.559479  388360 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2098896917/002/docker-machine-driver-kvm2
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-280295: (1.851276973s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.67s)

                                                
                                    
TestNoKubernetes/serial/Start (6.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-280295 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
I0904 21:52:14.721091  388360 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2098896917/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc0005911d0 gz:0xc0005911d8 tar:0xc000591150 tar.bz2:0xc000591180 tar.gz:0xc000591190 tar.xz:0xc0005911a0 tar.zst:0xc0005911c0 tbz2:0xc000591180 tgz:0xc000591190 txz:0xc0005911a0 tzst:0xc0005911c0 xz:0xc0005911e0 zip:0xc000591210 zst:0xc0005911e8] Getters:map[file:0xc0019fa700 http:0xc0005c6e60 https:0xc0005c6eb0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0904 21:52:14.721159  388360 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2098896917/002/docker-machine-driver-kvm2
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-280295 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.702929468s)
--- PASS: TestNoKubernetes/serial/Start (6.70s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-280295 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-280295 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.123839ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (1.118302996s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.89s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-280295
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-280295: (1.186396965s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-280295 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-280295 --driver=docker  --container-runtime=crio: (6.132013041s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-280295 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-280295 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.981495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (17.11s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-088246 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-088246 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.094535464s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (17.11s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-088246 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-088246 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-088246 --output=json --layout=cluster: exit status 2 (280.866088ms)

                                                
                                                
-- stdout --
	{"Name":"pause-088246","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-088246","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

                                                
                                    
TestPause/serial/Unpause (0.61s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-088246 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

                                                
                                    
TestPause/serial/PauseAgain (0.82s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-088246 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

                                                
                                    
TestPause/serial/DeletePaused (2.67s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-088246 --alsologtostderr -v=5
E0904 21:53:12.065845  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-088246 --alsologtostderr -v=5: (2.666025919s)
--- PASS: TestPause/serial/DeletePaused (2.67s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (16.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.271442749s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-088246
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-088246: exit status 1 (16.685033ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-088246: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (54.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-001115 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-001115 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (54.319856311s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (54.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-001115 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0e73c784-7332-4a11-80b6-69f647f2d224] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0e73c784-7332-4a11-80b6-69f647f2d224] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003438958s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-001115 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-001115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-001115 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (61.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-093695 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-093695 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m1.562434428s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-001115 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-001115 --alsologtostderr -v=3: (13.080043995s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-001115 -n old-k8s-version-001115
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-001115 -n old-k8s-version-001115: exit status 7 (73.590246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-001115 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-001115 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0904 21:55:02.006292  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/functional-434682/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-001115 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.190067965s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-001115 -n old-k8s-version-001115
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-093695 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1d3d0779-a39a-48a3-9189-5183a74a2d1b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1d3d0779-a39a-48a3-9189-5183a74a2d1b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004202827s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-093695 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rwd6f" [c0d64784-0487-4a2a-a31f-5d5657a5ea11] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00359841s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rwd6f" [c0d64784-0487-4a2a-a31f-5d5657a5ea11] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003798557s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-001115 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-093695 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-093695 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-093695 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-093695 --alsologtostderr -v=3: (12.166274715s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-001115 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-001115 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-001115 -n old-k8s-version-001115
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-001115 -n old-k8s-version-001115: exit status 2 (278.351153ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-001115 -n old-k8s-version-001115
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-001115 -n old-k8s-version-001115: exit status 2 (277.645418ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-001115 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-001115 -n old-k8s-version-001115
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-001115 -n old-k8s-version-001115
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (71.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-656000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-656000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m11.374762753s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093695 -n no-preload-093695
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093695 -n no-preload-093695: exit status 7 (74.379508ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-093695 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (49.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-093695 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-093695 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (48.67022794s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-093695 -n no-preload-093695
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-601847 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-601847 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m12.940752007s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.94s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (27.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-057120 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-057120 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (27.548854849s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.55s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sxc2m" [d918e911-159b-4260-aaf7-f31e5136e166] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003947917s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sxc2m" [d918e911-159b-4260-aaf7-f31e5136e166] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004271515s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-093695 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-057120 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-057120 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-057120 --alsologtostderr -v=3: (1.187452817s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-057120 -n newest-cni-057120
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-057120 -n newest-cni-057120: exit status 7 (67.303424ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-057120 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-057120 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-057120 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (13.136368663s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-057120 -n newest-cni-057120
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-093695 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-093695 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-093695 -n no-preload-093695
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-093695 -n no-preload-093695: exit status 2 (275.283347ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-093695 -n no-preload-093695
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-093695 -n no-preload-093695: exit status 2 (273.277146ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-093695 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-093695 -n no-preload-093695
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-093695 -n no-preload-093695
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.58s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-656000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7ee7673e-d44c-4298-8cd2-4b4f85a14835] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7ee7673e-d44c-4298-8cd2-4b4f85a14835] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003554163s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-656000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (72.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m12.477786204s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-057120 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-057120 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-057120 -n newest-cni-057120
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-057120 -n newest-cni-057120: exit status 2 (306.643867ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-057120 -n newest-cni-057120
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-057120 -n newest-cni-057120: exit status 2 (292.959534ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-057120 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-057120 -n newest-cni-057120
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-057120 -n newest-cni-057120
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.75s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-656000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-656000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-656000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-656000 --alsologtostderr -v=3: (11.905031411s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.91s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (73.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m13.678472941s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-656000 -n embed-certs-656000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-656000 -n embed-certs-656000: exit status 7 (85.038389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-656000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-656000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-656000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (51.660955553s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-656000 -n embed-certs-656000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [630c1706-495c-425e-a71c-81e3a695cfd1] Pending
helpers_test.go:352: "busybox" [630c1706-495c-425e-a71c-81e3a695cfd1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [630c1706-495c-425e-a71c-81e3a695cfd1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.004126482s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-601847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-601847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075047919s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-601847 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-601847 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-601847 --alsologtostderr -v=3: (12.848819648s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.85s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847: exit status 7 (80.722692ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-601847 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
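Note: exit status 7 from status --format={{.Host}} here just mirrors the Stopped host state of the halted profile, which is why the test marks it "may be ok" and goes on to enable the dashboard addon against the stopped cluster. A minimal sketch of the same sequence, reusing the profile name and commands from this block (the trailing echo is only added here for readability):

# status exits non-zero while the host is stopped; note the code and continue
out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601847 || echo "status exited $? (profile is stopped, expected here)"
# addons can still be toggled on a stopped profile
out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-601847 --images=MetricsScraper=registry.k8s.io/echoserver:1.4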

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-601847 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 21:57:55.142363  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:58:12.065780  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-601847 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (51.841773052s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.18s)
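The cert_rotation errors above point at the client certificate of the addons-049370 profile, which no longer exists by this stage of the run; they appear to come from the client's cert-rotation logic (logger tls-transport-cache) and do not affect this test. A quick way to confirm the path from the error message is simply gone (path copied verbatim from the log line):

ls -l /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/addons-049370/client.crt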

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-364928 "pgrep -a kubelet"
I0904 21:58:16.411928  388360 config.go:182] Loaded profile config "auto-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-364928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q6q2m" [ebb8f598-8451-43ed-8e13-37d5532e819c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q6q2m" [ebb8f598-8451-43ed-8e13-37d5532e819c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004307691s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.19s)
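NetCatPod replaces the netcat Deployment from testdata and then waits (up to 15m) for an app=netcat pod to report Running. A hedged approximation of that wait outside the Go helper, using the same context, manifest and label as the log (kubectl wait is an assumption about how to reproduce it by hand; the test itself polls the pod list):

kubectl --context auto-364928 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-364928 wait --for=condition=Ready pod -l app=netcat --timeout=15m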

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j99lf" [14000f2b-6c8b-4b24-8973-c88eafdc8488] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00379397s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
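UserAppExistsAfterStop only needs the kubernetes-dashboard pod to come back Running after the restart. A roughly equivalent manual check with the label selector and namespace shown above (kubectl wait used here as an illustrative stand-in for the polling helper):

kubectl --context embed-certs-656000 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m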

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j99lf" [14000f2b-6c8b-4b24-8973-c88eafdc8488] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002985851s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-656000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-364928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
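The HairPin probe has the netcat pod dial its own Service name (netcat, port 8080), which only succeeds when hairpin traffic (pod -> its own service -> back to the same pod) is permitted by the CNI. The same probe run by hand, identical to the command above except for an extra echo added here:

kubectl --context auto-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080 && echo hairpin-ok"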

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-656000 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)
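VerifyKubernetesImages dumps the images present in the node's container runtime as JSON and reports anything outside minikube's expected set (here the kindnetd CNI image and the busybox test image, both expected for this run). To inspect the same list manually, the JSON can be pretty-printed; piping through python3 -m json.tool is just one convenient option and not part of the test:

out/minikube-linux-amd64 -p embed-certs-656000 image list --format=json | python3 -m json.tool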

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-656000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-656000 -n embed-certs-656000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-656000 -n embed-certs-656000: exit status 2 (283.437274ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-656000 -n embed-certs-656000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-656000 -n embed-certs-656000: exit status 2 (283.732165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-656000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-656000 -n embed-certs-656000
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-656000 -n embed-certs-656000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.56s)
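While a profile is paused, status reports APIServer=Paused and Kubelet=Stopped and exits with status 2, which the test tolerates; unpause restores both. The same cycle by hand, with the commands from this block (comments are annotations added here):

out/minikube-linux-amd64 pause -p embed-certs-656000
out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-656000 || true    # prints Paused, exits 2
out/minikube-linux-amd64 unpause -p embed-certs-656000
out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-656000            # expected to report the apiserver running again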

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-l2lh2" [cc08f562-4127-4667-b896-24bd7a8fcccf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004601483s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
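ControllerPod only checks that the CNI daemonset pod is Running; with the label and namespace from the log this can be spot-checked directly (kubectl get is shown as an illustrative equivalent of the polling helper):

kubectl --context kindnet-364928 -n kube-system get pods -l app=kindnet -o wide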

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-364928 "pgrep -a kubelet"
I0904 21:58:35.783187  388360 config.go:182] Loaded profile config "kindnet-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-364928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vdb9v" [f0d5e263-5804-4f72-898f-d368ef563fdf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vdb9v" [f0d5e263-5804-4f72-898f-d368ef563fdf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003464514s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (48.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (48.525618865s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.53s)
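--cni=testdata/kube-flannel.yaml makes minikube apply that manifest instead of its default CNI during start. One way to confirm what actually landed on the node is to list the CNI config directory over minikube ssh (the /etc/cni/net.d path is the conventional CNI config location and an assumption here, not something this test asserts):

out/minikube-linux-amd64 ssh -p custom-flannel-364928 "ls /etc/cni/net.d"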

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-364928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (65.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0904 21:59:25.636990  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:59:25.643356  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:59:25.654771  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:59:25.676237  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:59:25.717637  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:59:25.799100  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:59:25.960615  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:59:26.282227  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:59:26.924287  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:59:28.205597  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 21:59:30.767469  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m5.553746456s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-364928 "pgrep -a kubelet"
I0904 21:59:35.580704  388360 config.go:182] Loaded profile config "custom-flannel-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-364928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wc7vq" [d89e80e8-dbba-4915-a6c4-b6d1d39f22eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0904 21:59:35.889677  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-wc7vq" [d89e80e8-dbba-4915-a6c4-b6d1d39f22eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003793717s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-364928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (61.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0904 22:00:06.612598  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.66170765s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-364928 "pgrep -a kubelet"
I0904 22:00:13.786118  388360 config.go:182] Loaded profile config "enable-default-cni-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-364928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5ns69" [a9d4a022-a071-4764-abae-2bb88fb39b8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5ns69" [a9d4a022-a071-4764-abae-2bb88fb39b8b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00463993s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-364928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (62.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0904 22:00:45.809132  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/no-preload-093695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:00:47.574311  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/old-k8s-version-001115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 22:00:56.050589  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/no-preload-093695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-364928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m2.659658947s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-w7sdr" [029897a7-5b83-4cf1-b378-970de9a23754] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003349145s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
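As with kindnet above, the flannel controller check only requires the app=flannel pods in the kube-flannel namespace to be Running; an illustrative one-liner for the same check:

kubectl --context flannel-364928 -n kube-flannel get pods -l app=flannel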

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-364928 "pgrep -a kubelet"
I0904 22:01:12.347660  388360 config.go:182] Loaded profile config "flannel-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-364928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5lffx" [fe6e8e80-865b-4ba4-aad5-148406385bec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5lffx" [fe6e8e80-865b-4ba4-aad5-148406385bec] Running
E0904 22:01:16.532455  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/no-preload-093695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003379183s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-364928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-364928 "pgrep -a kubelet"
I0904 22:01:44.922634  388360 config.go:182] Loaded profile config "bridge-364928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-364928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wk869" [99c85e4c-1134-47ed-9307-c28bb1ab5c58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wk869" [99c85e4c-1134-47ed-9307-c28bb1ab5c58] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004004979s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-364928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-364928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-601847 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-601847 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
E0904 22:16:45.084972  388360 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/bridge-364928/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847: exit status 2 (268.237225ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847: exit status 2 (266.338315ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-601847 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-601847 -n default-k8s-diff-port-601847
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.44s)

                                                
                                    

Test skip (27/325)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.26s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-049370 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.26s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-790833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-790833
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-364928 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-364928" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-364928" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 21:51:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-670610
contexts:
- context:
    cluster: kubernetes-upgrade-670610
    user: kubernetes-upgrade-670610
  name: kubernetes-upgrade-670610
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-670610
  user:
    client-certificate: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/kubernetes-upgrade-670610/client.crt
    client-key: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/kubernetes-upgrade-670610/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-364928

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-364928"

                                                
                                                
----------------------- debugLogs end: kubenet-364928 [took: 3.12478809s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-364928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-364928
--- SKIP: TestNetworkPlugins/group/kubenet (3.29s)
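
Every command in the debugLogs dump above fails with "context was not found" or "Profile \"kubenet-364928\" not found" because the kubenet group is skipped before a cluster is ever started, so no minikube profile or kubeconfig context with that name exists when the collector runs its kubectl and minikube commands. The Go sketch below is only an illustration of that failure mode, not the actual minikube debug helper; the contextExists function and the hard-coded profile name are assumptions made for the example.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// contextExists shells out to kubectl and reports whether the named kubeconfig
// context is present, mirroring how the integration tests invoke CLIs rather
// than importing client libraries.
func contextExists(name string) (bool, error) {
    out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    if err != nil {
        return false, err
    }
    for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        if ctx == name {
            return true, nil
        }
    }
    return false, nil
}

func main() {
    // "kubenet-364928" is the profile name from the log above; it was never
    // created because the test skipped before running "minikube start".
    ok, err := contextExists("kubenet-364928")
    if err != nil {
        fmt.Println("kubectl not available:", err)
        return
    }
    if !ok {
        fmt.Println("context \"kubenet-364928\" does not exist; skipping kubectl-based debug commands")
    }
}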

                                                
                                    
TestNetworkPlugins/group/cilium (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-364928 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-364928" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 21:52:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-280295
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 21:51:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-670610
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21490-384635/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 21:52:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-088246
contexts:
- context:
    cluster: NoKubernetes-280295
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 21:52:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: NoKubernetes-280295
  name: NoKubernetes-280295
- context:
    cluster: kubernetes-upgrade-670610
    user: kubernetes-upgrade-670610
  name: kubernetes-upgrade-670610
- context:
    cluster: pause-088246
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 21:52:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-088246
  name: pause-088246
current-context: pause-088246
kind: Config
preferences: {}
users:
- name: NoKubernetes-280295
  user:
    client-certificate: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/NoKubernetes-280295/client.crt
    client-key: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/NoKubernetes-280295/client.key
- name: kubernetes-upgrade-670610
  user:
    client-certificate: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/kubernetes-upgrade-670610/client.crt
    client-key: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/kubernetes-upgrade-670610/client.key
- name: pause-088246
  user:
    client-certificate: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/pause-088246/client.crt
    client-key: /home/jenkins/minikube-integration/21490-384635/.minikube/profiles/pause-088246/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-364928

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-364928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-364928"

                                                
                                                
----------------------- debugLogs end: cilium-364928 [took: 3.261545947s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-364928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-364928
--- SKIP: TestNetworkPlugins/group/cilium (3.41s)
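
The cilium group is skipped at net_test.go:102 ("Skipping the test as it's interfering with other tests and is outdated"), which is why the cilium-364928 profile and kubeconfig context never exist during the debugLogs pass above. The test file below is a hedged sketch of that t.Skip guard pattern from Go's testing package, not the actual net_test.go source; the package and test names are assumptions made for the example.

package net_sketch

import "testing"

// TestNetworkPluginsCilium illustrates the guard pattern: skipping before any
// cluster is created means the later debug collector finds no "cilium-364928"
// profile or kubeconfig context, producing the errors captured in the log.
func TestNetworkPluginsCilium(t *testing.T) {
    t.Skip("Skipping the test as it's interfering with other tests and is outdated")
}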

                                                
                                    